OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/


  1. Here’s an excerpt:

    > OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.

    > The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.

    > **The bill, SB 3444, would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website**. It defines frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs like OpenAI, Google, xAI, Anthropic, and Meta.

    > …

    > Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn’t intentional and they published their reports.

    I’ve seen some bad AI bills before, but this one might just take the cake. Complying with federal standards and not acting recklessly does *not* shield companies from liability under normal circumstances—drugs, cars, consumer products, none of them get exemptions like this.

    I sincerely hope that lawmakers are sane enough to not let this pass.

  2. thegooddoktorjones on

    A law that does absolutely nothing for the vast majority of citizens. Pure corrupt graft.

  3. The issue with AI models is that you can’t hold them accountable, and companies don’t want to be liable for their product.

  4. Spez_is-a-nazi on

    Remember kids, corporations are all about privatizing gains and socializing losses. We are all on the hook for the environmental damage caused, the increased energy bills, the noise, the impact of the disinformation campaigns, all the different types of harms they cause. But those subscription revenues? They belong just to Sammy. 

  5. Significant_You_2735 on

    This is absolutely part of why some corporations want to use AI in the first place – escaping accountability for destructive and dangerous decisions in the pursuit of wealth at any cost. “We didn’t do that, IT did.”

  6. This makes me think AI is likely to enable mass deaths or financial disasters. Can we stop that before the liability part?

  7. Practical_Rip_953 on

    I’m so glad to see the government heard the people’s concerns about AI and jumped in to address the real issues with AI /s

  8. Capable-Student-413 on

    So tired of Americans’ false surprise about this type of shit. It’s not news. Your country sucks and the world knows it.  
    Decades of weekly school shootings and a pedophile President. Cops shooting children on camera, alcoholic Supreme Court justices…

    But this injustice is the surprise?

  9. plan_with_stan on

    Soooo, an AI company decides to release a model that, among other things, can create bioweapons for a terrorist organization that wouldn’t normally have this capability. The terrorist org uses it, kills a lot of people, takes down power grids, and sets off mass-casualty and chaos events… and the AI company can go “well… we didn’t do that, the terrorists did” and it will all be fine and dandy??

    That’s just bullshit – there needs to be oversight and liability so they make sure their models don’t fuck around.

    Imagine if Airbus decided to go the SpaceX route and just… test their airplanes live, with passengers. A new wing design we don’t know works? Yeah, put it on the plane from Amsterdam to Auckland… let’s see if it works.

  10. WellSpreadMustard on

    The oligarchy is going to use AI to do a big “whoopsie daisy, the AI killed a bunch of poor people”

  11. ImportantDirt1796 on

    Basically saying “we want to build powerful AI but don’t want to be responsible if it breaks things.”

    Classic big corp play. “We will rule the world but if anything goes wrong it’s not our liability”

    That’s not innovation, that’s just risk-shifting to everyone else.

  12. First they steal from everyone and aren’t punished, now they also want to evade repercussions…

    Fuck AI, all of it is just shit and it’s making the world a worse place.

  13. throwaway110906 on

    They’re fucking around so much, I cannot wait for the absolute comeuppance the find-out will be.

  14. Well, any support I had for AI just went right out the window. AI can fuck right off.

    Can you imagine if self driving cars had that disclaimer? They would be banned immediately.

  15. I agree with this in a weird way.

    Put the liability squarely on the companies that deploy the AI platforms, not those that make them. If you replace employees with robots, then the business is directly responsible for the outcomes.

    Maybe when the first few giants fall because their new magic money stick explodes then business will realise humans who can be blamed individually weren’t so bad after all.

  16. FastFingersDude on

    I’ve never gone from loving to hating a company as fast as with OpenAI. I guess AI does speed things up.

  17. ThePickleConnoisseur on

    AI companies want everyone to use AI but don’t want to be responsible for their own software. Interesting how every other sector, no matter how small, is held to higher standards.