Scientific Trading

We Wanted Responsible AI. They Gave Us The NY RAISE Act.
The NY RAISE Act regulates the development of the most powerful AI "frontier models", the ones requiring more than $100 million in computational resources to train. But does it enable responsible AI?

J Faleiro
Jun 14, 2025
I am an engineer, not a lawyer. The perspectives and analysis shared in this article reflect my technical understanding and personal research, not formal legal advice. If you have questions about how the RAISE Act or any AI-related regulations may apply to your specific situation, I strongly encourage you to consult with a qualified attorney. Professional legal counsel is essential for making informed decisions.

The New York State RAISE Act (Responsible AI Safety and Education Act), passed on June 12, 2025, is a landmark law designed to regulate the development and deployment of the most powerful artificial intelligence systems, specifically the so-called “frontier models” that require more than $100 million in computational resources to train.

Its primary aim is to protect the public from catastrophic risks posed by advanced AI, balancing its protections with the need for ongoing innovation and transparency.


The RAISE Act represents America’s first major legislative effort to regulate frontier AI models. While pioneering in its focus on catastrophic risk prevention, the law reveals significant gaps in privacy preservation, human rights safeguards, and ethical enforcement.

The pursuit of pragmatic and responsible AI will certainly follow a lengthy and convoluted path of regulation. The NY RAISE Act is just an incomplete beginning.

In this article, we will go over its components and limitations and, more importantly, assess its impact on you, whether you are an AI practitioner, an enthusiast, an entrepreneur, or a New York State resident.

Nuts and Bolts

Definitions

The RAISE Act applies specifically to the so-called “large developers”. A “large developer” is a juridical person that has trained at least one “frontier model.” A “frontier model” is defined as an AI model that is either:

  1. Trained on more than 10^26 floating-point operations (FLOPs) and costing more than $100 million in compute to train, or;

  2. Produced by applying “knowledge distillation” to another frontier model and costing more than $5 million in compute.

The bill’s obligations thus fall on “large developers” of frontier models: juridical persons that have trained at least one frontier model and have spent more than $100 million in aggregate compute costs to train frontier models, or that have applied knowledge distillation to a frontier model.
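The two-pronged definition above can be expressed as a simple predicate. This is an illustrative sketch only, not a legal test: the thresholds are those quoted in the article, while the class, field, and function names are mine.

```python
from dataclasses import dataclass

# Thresholds as summarized in the article (illustrative, not legal advice).
FLOP_THRESHOLD = 1e26            # training compute, floating-point operations
TRAIN_COST_THRESHOLD = 100e6     # USD compute cost to train
DISTILL_COST_THRESHOLD = 5e6     # USD compute cost of distillation

@dataclass
class Model:
    training_flops: float                 # total FLOPs used in training
    training_cost_usd: float              # compute cost of training
    distilled_from_frontier: bool = False # produced by distilling a frontier model?
    distillation_cost_usd: float = 0.0    # compute cost of that distillation

def is_frontier_model(m: Model) -> bool:
    """True if the model meets either prong of the 'frontier model' definition."""
    prong_1 = (m.training_flops > FLOP_THRESHOLD
               and m.training_cost_usd > TRAIN_COST_THRESHOLD)
    prong_2 = (m.distilled_from_frontier
               and m.distillation_cost_usd > DISTILL_COST_THRESHOLD)
    return prong_1 or prong_2
```

Note that both conditions in the first prong must hold at once: a model trained on more than 10^26 FLOPs but at a lower cost would not qualify under that prong.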

The original definition of knowledge distillation in deep learning comes from the 2015 paper by Geoffrey Hinton, Oriol Vinyals, and Jeff Dean titled "Distilling the Knowledge in a Neural Network."
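For readers unfamiliar with the technique: in that paper, a smaller “student” model is trained to match the “teacher” model's output distribution, softened by a temperature parameter. A minimal sketch of the soft-target loss, using only the standard library; the function names and the temperature default are mine, and a real implementation would combine this with a hard-label term and train by gradient descent.

```python
import math

def softmax(logits, temperature=1.0):
    """Softened probability distribution over logits (higher T = softer)."""
    z = [x / temperature for x in logits]
    m = max(z)                              # subtract max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's and student's softened outputs."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student's softened predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The softened targets carry more information than one-hot labels (the teacher's relative confidence across wrong classes), which is what lets a cheap student approximate an expensive teacher, and why the Act treats distilled models as a separate, lower-cost prong.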
