> OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
> The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
> **The bill, SB 3444, would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website**. It defines frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs like OpenAI, Google, xAI, Anthropic, and Meta.
>…
> Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn’t intentional and they published their reports.
I’ve seen some bad AI bills before, but this one might just take the cake. Complying with federal standards and not acting recklessly does *not* shield companies from liability under normal circumstances—drugs, cars, consumer products, none of them get exemptions like this.
I sincerely hope that lawmakers are sane enough to not let this pass.
thegooddoktorjones on
A law that does absolutely nothing for the vast majority of citizens. Pure corrupt graft.
imadij on
The issue with AI models is you can’t hold them accountable and companies don’t want to be liable for their product
Anim8nFool on
I’m sure they do
RandomUwUFace on
AI is becoming “too big to fail.” How does one fight back against this?
Spez_is-a-nazi on
Remember kids, corporations are all about privatizing gains and socializing losses. We are all on the hook for the environmental damage caused, the increased energy bills, the noise, the impact of the disinformation campaigns, all the different types of harms they cause. But those subscription revenues? They belong just to Sammy.
EndeLarsson on
In the US this will pass with no problem.
Significant_You_2735 on
This is absolutely part of why some corporations want to use AI in the first place – escaping accountability for destructive and dangerous decisions in the pursuit of wealth at any cost. “We didn’t do that, IT did.”
bluestreakxp on
Ah I didn’t know skynet wanted indemnity and hold harmless arrangements
Sc0j on
This makes me think AI is likely to enable mass deaths or financial disasters. Can we stop that before the liability part?
Practical_Rip_953 on
I’m so glad to see the government heard the people’s concerns about AI and jumped in to address the real issues with AI /s
Capable-Student-413 on
So tired of Americans’ false surprise about this type of shit. It’s not news. Your country sucks and the world knows it.
Decades of school shootings every week and a pedophile President. Cops shooting children on camera, alcoholic supreme court justices….
But this injustice is the surprise?
AaronPseudonym on
Things you do before you kill many people, for 100, Alex?
Fair_Blood3176 on
NO WAY!! UNBELIEVABLE!
plan_with_stan on
soooo, an AI company decides to release a model that, among other things, can create bioweapons for a terrorist organization that would not normally have this capability. The terrorist org uses it, kills a lot of people, knocks out power grids, and sets off mass-casualty and chaos events… and the AI company can go “well… we didn’t do that, the terrorists did” and it will all be fine and dandy??
that’s just bullshit – there needs to be oversight and liability so they make sure their models don’t fuck around.
imagine Airbus decided to go the SpaceX route and just… test their airplanes live, with passengers. A new wing design we don’t know works? Yeah, put it on the plane from Amsterdam to Auckland… let’s see if it works.
Squibbles01 on
Every day I hate AI more.
Dry_Jellyfish641 on
I can’t wait for Ted Cruz to defend this one
idrivehookers on
This is stupid.
WellSpreadMustard on
The oligarchy is going to use AI to do a big “whoopsie daisy, the AI killed a bunch of poor people”
ImportantDirt1796 on
Basically saying “we want to build powerful AI but don’t want to be responsible if it breaks things.”
Classic big corp play. “We will rule the world but if anything goes wrong it’s not our liability”
That’s not innovation, that’s just risk-shifting to everyone else.
FredFredrickson on
I’m assuming they backed it with a massive bribe, first.
7grims on
First they steal from everyone and aren’t punished; now they also want to evade repercussions…
fuck AI, all of it is just shit and it’s making the world a worse place
throwaway110906 on
they’re fucking around so much, I cannot wait for the absolute comeuppance the find out will be
pornborn on
Well, any support I had for AI just went right out the window. AI can fuck right off.
Can you imagine if self-driving cars had that disclaimer? They would be banned immediately.
Oddball_bfi on
I agree with this in a weird way.
Put the liability squarely on the companies that deploy the AI platforms, not those that make them. If you replace employees with robots, then the business is directly responsible for the outcomes.
Maybe when the first few giants fall because their new magic money stick explodes then business will realise humans who can be blamed individually weren’t so bad after all.
TedTyro on
They’re really selling it.
ortrtaaitdbt2000 on
Why the fuck are we allowing this into our society?
Ballad_Bird_Lee on
Hell no, we bout to pull a T2 Skynet
pandaSmore on
Hmm I wonder why 🤔
PlanetTourist on
The leopards are making it legal for them to eat your face.
YearlyLemon8 on
Of course they would! Who would have thought.
FastFingersDude on
I’ve never gone from loving to hating a company as fast as I did with OpenAI. I guess AI does speed things up.
ThePickleConnoisseur on
AI companies want everyone to use AI but don’t want to be responsible for their software. Interesting how every other sector is held to higher standards, no matter how small.
All profit and zero responsibility, must be nice…