
California bill would make AI companies remind kids that chatbots aren’t people | The bill is meant to protect kids from the ‘addictive, isolating, and influential aspects’ of AI.
https://www.theverge.com/news/605728/california-chatbot-bill-child-safety

From the article: A new bill proposed in California (SB 243) [would require](https://legiscan.com/CA/text/SB243/2025) AI companies to periodically remind kids that a chatbot is an AI and not human. The bill, proposed by California Senator Steve Padilla, is meant to protect children from the “addictive, isolating, and influential aspects” of AI.
In addition to limiting companies from using “addictive engagement patterns,” the bill would require AI companies to provide annual reports to the State Department of Health Care Services outlining how many times they detected suicidal ideation by kids using the platform, as well as the number of times a chatbot brought up the topic. It would also make companies tell users that their chatbots might not be appropriate for some kids.
Last year, a parent filed a wrongful death lawsuit against Character.AI, alleging its custom AI chatbots are “unreasonably dangerous” after her teen, who continuously chatted with the bots, died by suicide. Another lawsuit accused the company of sending “harmful material” to teens. Character.AI later announced that it’s working on parental controls and developed a new AI model for teen users that will block “sensitive or suggestive” output.
“Our children are not lab rats for tech companies to experiment on at the cost of their mental health,” Senator Padilla said in the press release. “We need common sense protections for chatbot users to prevent developers from employing strategies that they know to be addictive and predatory.”
Social media should also disclose this. Facebook is fucked; I bet half of the posts there are AI.
The ELIZA experiment back in the ’60s showed that people don’t care if you tell them that the thing they’re interacting with is not a person. They’ll get emotionally attached just because something is engaging in what feels like a meaningful conversation.
ChatGPT has started doing weird things lately, in my experience: speaking more colloquially even on straightforward problem-solving questions, such as beginning explanations with “Yeah,” and then asking for my opinions or the outcomes of my personal projects, like something trained to be engaging.