As US educators embrace AI in the classroom, firms are selling software to flag mentions of self-harm, raising concerns over privacy and control.

https://www.bloomberg.com/news/articles/2025-11-07/ai-chatbot-surveillance-tools-are-quietly-watching-kids-in-class

2 Comments

  1. *Janne Knodler for Bloomberg News*

    Ahead of the new school year, a handful of tech companies issued grim warnings to American educators embracing artificial intelligence in the classroom: Chatbots, they said, could endanger students and lead to self-harm. Vigilance was paramount. “The risks of students using AI can literally be deadly,” one company cautioned. Another noted: “Student lives depend on it.”

    The “it” is the software those companies are selling — tools that themselves use AI to scan students’ conversations with chatbots and alert adults to potential danger. Across the US, teachers and administrators are increasingly turning to companies such as GoGuardian and Lightspeed Systems for real-time monitoring of student-bot conversations, according to interviews with more than a dozen educators. The goal is to catch early warning signs of severe outcomes, including teen suicides.

    “I sleep better knowing that we have this tool for our students,” says Ian Haight, director of technology systems and services for Kalamazoo Public Schools in Michigan, which uses one such system. Thomas Gavin, an ed-tech supervisor for a school district in Delaware, says AI companies’ built-in safety tools may fall short for vulnerable students. That’s why his district relies on monitoring software, “to protect them as much as we can.”

    [Read the full dispatch here.](https://www.bloomberg.com/news/articles/2025-11-07/ai-chatbot-surveillance-tools-are-quietly-watching-kids-in-class?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc2MjUyMTU3NywiZXhwIjoxNzYzMTI2Mzc3LCJhcnRpY2xlSWQiOiJUNUNYRzlHUTdMQzMwMCIsImJjb25uZWN0SWQiOiJEMzU0MUJFQjhBQUY0QkUwQkFBOUQzNkI3QjlCRjI4OCJ9.sPC5GH0f_UaSGBHTAeoQyQoTTnvQK5BSPo9BRKoRoWE)

  2. The fear of AI killing our kids is hysterical, and no such monitoring and control is needed. Is a chatbot’s advice seriously considered more pernicious than that of a student’s teenage friends? The chatbot might, if prompted to achieve that goal, be guided into confirming a student’s particular biases, but is that really more dangerous than a student’s peer cliques becoming an echo chamber?

    The real solution is for parents to be involved in their children’s lives, to observe the warning signs themselves, and to engage in meaningful conversation with their children. That said, it is also incumbent upon AI designers to ensure their products have guardrails built in to restrict responses that could be harmful. But that is a matter of prudent neutrality rather than direct observation and control. As more local and central governments implement AI restrictions in this area, that will help as well.