“If I build a car that is far more dangerous than other cars, don’t do any safety testing, release it, and it ultimately leads to people getting killed, I will probably be held liable and have to pay damages, if not criminal penalties.
If I build a search engine that (unlike Google) has as the first result for “how can I commit a mass murder” detailed instructions on how best to carry out a spree killing, and someone uses my search engine and follows the instructions, I likely won’t be held liable, thanks largely to Section 230 of the Communications Decency Act of 1996.
So here’s a question: Is an AI assistant more like a car, where we can expect manufacturers to do safety testing or be liable if they get people killed? Or is it more like a search engine?
This is one of the questions animating the current raging discourse in tech over [California’s SB 1047](https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047), legislation in the works that mandates that companies that spend more than $100 million on training a “frontier model” in AI — like the in-progress GPT-5 — do safety testing. Otherwise, they would be liable if their AI system leads to a “mass casualty event” or more than $500 million in damages in a single incident or set of closely linked incidents.
The general concept that AI developers should be liable for the harms of the technology they are creating is [overwhelmingly popular](https://theaipi.org/poll-shows-voters-oppose-open-sourcing-ai-models-support-regulatory-representation-on-boards-and-say-ai-risks-outweigh-benefits-2/#:~:text=Among%20the%20findings%3A,that%20believe%20they%20should%20not) with the American public, and an earlier version of the bill — which was much more stringent — [passed the California state senate 32-1](https://sd11.senate.ca.gov/news/bipartisan-vote-senate-passes-senator-wieners-landmark-ai-safety-and-innovation-bill). It has [endorsements](https://www.wate.com/business/press-releases/ein-presswire/714330132/california-senate-bill-sb-1047-another-stifling-blow-to-silicon-valley-ai-startups/) from Geoffrey Hinton and Yoshua Bengio, two of the [most-cited AI researchers in the world](https://analyticsindiamag.com/meet-the-worlds-most-cited-deep-learning-researchers-whose-innovations-are-transforming-the-industry/).”
Just reading about the energy AI needs, this is another bill that should make us all concerned. It’s not as bad as bitcoin mining, but the exponential growth of AI is making it very serious.
ttkciar:
It’s not big tech that is upset about this bill — it’s startups and the open source community.
This bill imposes requirements which pose no problems to larger companies like Google or OpenAI, but are prohibitively burdensome to smaller companies which might compete with them.
It also stipulates that AI implementations must only run on infrastructure (like servers in datacenters) under the control of the authors. Again, this is not a problem for Google or OpenAI, who already operate in datacenters and do not seek to ever release their models’ weights to the public.
The bill’s criteria for “covered models” include not only models trained with vast amounts of compute, but also **any future AI implementation with capabilities similar to such models.** That means the open source community will also be subject to these regulations as better architectures become available, making it illegal to share implementations of those architectures (because of the bill’s stipulation that they run on infrastructure under the author’s control).
The effect of this bill is to give Big AI a legal “moat” against smaller competitors and against disruption from the open source community, per the (in)famous [“We Have No Moat” memo,](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither) essentially a coup of [regulatory capture.](https://wikipedia.org/wiki/Regulatory_capture)
Sounds good, but Newsom will veto it. He’s in the pocket of big tech.
magvadis:
Another Big Money bill that’s specifically designed to not target the first movers and in turn create a monopoly.
Classic Government. Too scared to bite the hand that feeds.
Google needs to take the cost, and OpenAI needs to pay the people it steals from (aka, not exist), because its algorithm is just a cheap copy designed to undermine copyright laws.
hawkwings:
Are random people allowed to sue? If so, it is a bad law. There should be some safety checks, but how many safety checks? The legal system could get tied up deciding how much is enough. Suppose an AI system kills 100 people and saves 1000 lives. Should we prevent it from existing? If the law prevents innovation, that’s a problem.
SpaceshipEarth10:
AI is not the problem. The real problem is that AI is operating within an obsolete financial system. AI works best when as much clean data as possible is collected. The current financial markets rely heavily on shrewdness and toxic competition. A simple fix is to switch from the shareholder theory of business practices to stakeholder theory. The former has transformed into caring only about making money. The latter is all about taking care of the entire business model. That means people can be paid periodically for their contributions to AI and LLMs. Businesses already take your data through theft by deception and use it to generate money. Why not pay the user? 🙂