Policy | 6/17/2025
New York Set to Implement Groundbreaking AI Safety Legislation
New York is on the brink of enacting significant legislation aimed at improving safety and transparency in the development of powerful artificial intelligence (AI) models. The Responsible AI Safety and Education (RAISE) Act has passed both the state Senate and Assembly and now awaits the signature of Governor Kathy Hochul.
Key Provisions of the RAISE Act
If signed into law, the RAISE Act would require major AI companies to publish detailed safety protocols and report serious safety incidents. The legislation would position New York as a leader in AI regulation in the United States and could set a precedent for other states and the federal government.
The RAISE Act targets what it defines as "frontier AI models": systems whose developers have invested more than $100 million in computational resources for training. This threshold aims the regulation at the most powerful AI systems, such as those built by industry leaders like OpenAI, Google, and Anthropic, while leaving smaller startups free to innovate without excessive compliance burdens.
Proponents of the bill, including State Senator Andrew Gounardes and Assemblymember Alex Bores, argue that the advanced capabilities of these technologies call for commonsense safeguards against risks such as the misuse of AI to help create biological weapons or to carry out automated crimes at scale.
Compliance and Reporting Requirements
The RAISE Act outlines several compliance requirements for large AI developers:
- Companies must create and publish a safety plan to address severe risks prior to deploying their models.
- These safety plans must be reviewed by qualified third parties to ensure their effectiveness.
- Any serious safety incidents, such as theft of a model or dangerous behavior by a model, must be reported to the New York Attorney General and the state Division of Homeland Security and Emergency Services within 72 hours.
Noncompliance could result in civil penalties reaching millions of dollars, calculated as a percentage of a model's training costs. The bill also protects whistleblowers who report safety risks, shielding them from retaliation.
Industry Response and Criticism
Despite its legislative success, the RAISE Act has faced criticism from some sectors of the tech industry. Groups such as the Business Software Alliance (BSA) and the Software & Information Industry Association (SIIA) have expressed concerns that the legislation was rushed and relies on vague definitions. Critics argue that the requirement to publish safety protocols could inadvertently provide a roadmap for malicious actors to exploit vulnerabilities.
Industry representatives further contend that the bill unfairly holds developers accountable for misuse of their models by third parties, which they say is beyond a developer's control. Some suggest that a single national framework, rather than a state-by-state patchwork of regulations, would govern AI technologies more effectively.
National Implications
The debate surrounding the RAISE Act reflects a broader national conversation about how to govern rapidly advancing AI technologies. The legislation draws inspiration from California's SB 1047, a similar bill that was ultimately vetoed by Governor Gavin Newsom, while avoiding some of its more controversial provisions.
Governor Hochul has previously shown a commitment to AI regulation, having signed laws requiring safeguards for AI companion systems and combating AI-generated child sexual abuse material. Her decision on the RAISE Act, due by the end of the year, is expected to carry significant implications for the future of AI governance in the United States and could establish New York as a leader in mandating transparency and safety in AI development.