
  1. Submission statement: Given how fast AI development moves and how slowly governments act, when is the right time to start setting up government safety systems for AI?

    It’ll always seem either too early, while AIs aren’t yet capable of great harm, or too late, after they’ve already caused massive damage.

    I think most people should be free to do whatever they want as long as they aren’t hurting non-consenting adults. But things like nukes and biological weapons should be regulated; not just anybody should be allowed to build them.

    How does this apply to something like AI, which will soon be powerful enough to help build biological and nuclear weapons but isn’t yet? Should we wait until it is? But is it too late by then, especially given how slowly the government moves?

  2. A relevant point the author didn’t mention is that California regulatory legislation is frequently used later as a template for federal legislation.

    The original version of this bill made some pretty ugly waves in the open source LLM community. It contained language that would have badly hobbled us, not only in developing new algorithms but also in sharing model weights (the original “kill switch” language was draconian).

    Like Wiener said, though, after the bill passed its house of origin they amended it substantially and excised the offending text. It’s quite benign for the open source community now.