8 Comments

  1. Regulating AI is going to be close to impossible. It is good that people are trying, though. I think the genie is out of the bottle.

  2. A positive move, but we've got AI companies popping up every day. How will they control an ever-adapting AI world?

  3. What this really means, in practice, is that companies just have to create CYA policies. But very little will change, because that’s SOP, and enforcement is difficult, if not non-existent.

    > The bill will require large AI developers to publish information about their safety protocols and report safety incidents to the state within 72 hours.

  4. This is such a dark fucking joke & was literally gutted by lobbying. 

    What it does:

    The RAISE Act requires developers of the industry’s most advanced AI tools, called frontier models, to report critical safety incidents within 72 hours and imposes a $1 million penalty for the first violation and $3 million for subsequent ones.

    Not the best… but hey, at least it does something. Until you read what's defined as harm:

    “Critical harm” means the death or serious injury of one hundred or more people or at least one billion dollars of damages to rights in money or property caused or materially enabled by a large developer’s use, storage, or release of a frontier model, through either of the following:

    (a) The creation or use of a chemical, biological, radiological, or nuclear weapon; or
    (b) An artificial intelligence model engaging in conduct that does both of the following:
    (i) Acts with no meaningful human intervention; and
    (ii) Would, if committed by a human, constitute a crime specified in the penal law that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

    In other words, up to a $1 million fine if AI does $1 billion of damage (the first time) and they don’t report it. $999 million of damage, and they don’t report it? $0.

    Hypothetically, if Microsoft put Copilot in charge of a factory that built and detonated a bomb, they’d be better off not reporting it, as it’s only a $1 million fine.

  5. Mamdani should have signed it, and Trump would have thought that a sexy move and given it a 600, 700, 800% approval

  6. Did any of you take the time to read the act? Compliance for large developers will be standardized, and the same HIPAA / SOC 2 compliance audit specialists will offer this service to the large firms. None of the points in the act is particularly egregious or problematic for any AI company not doing something shady, and safeguarding frontier models from misuse seems like a pretty good idea… If you oppose this blindly or believe AI regulation is too difficult, maybe take your political doomerism somewhere else.