“The Cyberspace Administration of China released the proposed regulations on Saturday, targeting AI products that form emotional connections with users via text, audio, video, or images. The draft requires service providers to actively monitor users’ emotional states and intervene when signs of addiction or ‘extreme emotions’ appear.
Under the proposal, AI providers would assume safety responsibilities throughout the product life cycle, including establishing systems for algorithm review and data security.
A key component of the draft is a requirement to warn users against excessive use. Platforms would need to remind users they are interacting with an AI system upon logging in and at two-hour intervals — or sooner if the system detects signs of overdependence, Reuters reports.
If users exhibit addictive behavior, providers are expected to take necessary measures to intervene. The draft also reinforces content red lines, stating that services must not generate content that endangers national security, spreads rumors, or promotes violence or obscenity.”
icebergslim3000:
People aren’t making babies if they’re in love with their phone

Pepethenormie:
China once again being competent in a way America will never be able to handle