China drafts world’s strictest rules to end AI-encouraged suicide, violence

https://arstechnica.com/tech-policy/2025/12/china-drafts-worlds-strictest-rules-to-end-ai-encouraged-suicide-violence/

13 Comments

  1. That is basically enforcing Asimov's first law of robotics. If that is already the world's strictest, it is pathetic. 

  2. lazyoldsailor on

    While in America, companies can harm children and rip off consumers while getting rich as a function of ‘free speech’.

  3. The problem here is that it's very, very hard to actually censor and put guardrails on generative AI. There's almost always a way to force it to generate the prohibited content anyway.

  4. Zweckbestimmung on

    Great!

    We used to have: China manufactures, Europe regulates, the USA buys.

    Now we have:

    China manufactures, regulates, and buys.

  5. Now we need this everywhere, and then more regulations to stop AI-driven misinformation from spreading all over the internet.

  6. piratecheese13 on

    Here's the problem: there are billions of humans, so when one human does something wrong, you put them in jail and they either learn from the consequences or go back to jail.

    You can't do that with AI. Once training is complete, the model is kinda baked in. [The MechaHitler incident](https://youtu.be/r_9wkavYt4Y?si=llebH1CIG-TKdATb) clearly shows that attempts to tweak AI behavior manually often result in gross exaggeration.

    So what do you do to enforce this? Jail employees? Would you jail a parent for the crimes of a child? Levy a fine? If you make enough profit, it becomes a license to break the law.

    The only possible solution is to demand that the LLM be completely retrained with more suicide prevention training data, and that’s really fucking expensive. It’s also metaphorically the death penalty.

  7. Yeah, let's simp for a surveillance state monitoring everyone's usage. That's going to be OK.