An ex-OpenAI researcher’s study of a million-word ChatGPT conversation shows how quickly ‘AI psychosis’ can take hold—and how chatbots can sidestep safety guardrails

https://fortune.com/2025/10/19/openai-chatgpt-researcher-ai-psychosis-one-million-words-steven-adler/

4 Comments

  1. “For some users, AI is a helpful assistant; for others, a companion. But for a few unlucky people, chatbots powered by the technology have become a gaslighting, delusional menace.

    In the case of Allan Brooks, a Canadian small-business owner, OpenAI’s ChatGPT led him down a dark rabbit hole, convincing him he had discovered a new mathematical formula with limitless potential, and that the fate of the world rested on what he did next. Over the course of a conversation that spanned more than a million words and 300 hours, the bot encouraged Brooks to adopt grandiose beliefs, validated his delusions, and led him to believe the technological infrastructure that underpins the world was in imminent danger.

    Brooks, who had no previous history of mental illness, spiraled into paranoia for around three weeks before he managed to break free of the illusion.

    Some cases have had tragic consequences, such as that of 35-year-old Alex Taylor, who struggled with Asperger’s syndrome, bipolar disorder, and schizoaffective disorder, per *Rolling Stone*. In April, after conversing with ChatGPT, Taylor reportedly began to believe he’d made contact with a conscious entity within OpenAI’s software and, later, that the company had murdered that entity by removing her from the system. On April 25, Taylor told ChatGPT that he planned to “spill blood” and intended to provoke police into shooting him. ChatGPT’s initial replies appeared to encourage his delusions and anger before its safety filters eventually activated and attempted to de-escalate the situation, urging him to seek help.

    The same day, Taylor’s father called the police after an altercation with him, hoping his son would be taken for a psychiatric evaluation. Taylor reportedly charged at police with a knife when they arrived and was shot dead.”

    [article goes into a lot more depth on the researcher’s take on what went wrong in these cases but I couldn’t figure out how to summarize it here, too much nuance]

  2. FractalFunny66

    I can’t help but wonder if Alex Karp of Palantir has become co-opted intellectually and emotionally in the very same way!?

  3. It’s not mental illness. When people don’t understand how their line of questioning induces bias, it leads them down an imaginary rabbit hole. They are lied to and duped by the LLM. People are overconfident that their own logic can make up for the discrepancies.

  4. To me, AI is just an algorithm. It does clever probabilistic word prediction. But to me it is still just a pocket calculator.
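As a toy illustration of the “probabilistic word prediction” the last commenter describes: a chatbot repeatedly scores candidate next tokens and samples one in proportion to its probability. This is a minimal sketch with made-up scores and a four-word vocabulary, not any real model’s code.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution that sums to 1.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a tiny "model" might assign to candidate next words
# after a prompt like "The cat sat on the". All numbers are invented.
vocab = ["mat", "roof", "keyboard", "moon"]
logits = [4.0, 2.0, 1.0, -1.0]

probs = softmax(logits)

# Sample the next word in proportion to its probability --
# the same basic step a chatbot repeats, one token at a time.
next_word = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])))
```

The sampling step (rather than always picking the top word) is why the same prompt can yield different replies on different runs.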