The RAISE Act is a major piece of New York legislation aimed at frontier AI models. It applies to companies with more than $500M in revenue and requires them to publish and follow safety plans for preventing their models from causing large-scale harms, and to report serious safety incidents. Its supporters (including its sponsor, Alex Bores, who’s running for Congress) hope it could serve as a starting point for a future federal standard for AI, and the fact that it passed despite opposition from the tech industry and the White House could have implications further down the road for how AI gets handled legally.
From the article:
> New York Gov. Kathy Hochul on Friday signed into law a new bill aimed at regulating artificial intelligence companies and requiring them to write, publish and follow safety plans.
> **Starting Jan. 1, 2027, any company with more than $500 million in revenue that develops a large AI system will have to publish and follow protocols aimed at preventing critical harm from the AI models and report any serious breaches or else face fines.**
> The law also establishes a new office within the New York State Department of Financial Services for enforcement, issuing rules and regulations, assessing fees, and publishing an annual report on AI safety. Some elements of the new law will simplify and codify existing best practices.
> The signing of the bill, known as the Responsible AI Safety and Education, or Raise, Act, comes a week after President Trump signed an executive order that aims to block states from regulating AI.
> “We rejected that,” said the bill’s sponsor, Alex Bores, assembly member for the 73rd District of New York in Manhattan. “While I agree this is best done at the federal level, we need to actually do it at the federal level. We can’t just be stopping people from taking action to protect their citizens.”
> He said the final version of the Raise Act used California’s SB53 bill, signed by Gov. Gavin Newsom earlier this year, as a starting point. But it is stricter in some areas, including the amount of time AI developers have to disclose safety incidents. The California law requires 15 days, while New York requires 72 hours.
> Bores said the disclosure timeline was one of the most contentious areas of the bill, adding that one AI lab emailed three hours before it was signed to ask for changes.
> “This was a real fight by extremely powerful interests to stop any movement in this space and to set the California law as the new ceiling, and that bubble was burst,” he said.
> Bores has announced plans to run in the open primary to replace retiring Rep. Jerrold Nadler (D., N.Y.) in Congress.
**Submission statement:**
The RAISE Act is the second major frontier AI safety bill after California’s SB 53, and it’s aimed at AI companies with >$500M in revenue. On a related note, the anti-AI-regulation super PAC Leading the Future (backed by Greg Brockman and a16z) stirred up controversy last month when it announced that the bill’s sponsor, Alex Bores, [would be its first target.](https://techcrunch.com/2025/11/17/a16z-backed-super-pac-is-targeting-alex-bores-sponsor-of-new-yorks-ai-safety-bill-he-says-bring-it-on/)
Well yeah, executive orders don’t bind state law; they only apply to the executive branch of the federal government.