First major study on ‘AI psychosis’ suggests chatbots can encourage delusions among vulnerable people. AI may validate or amplify delusional or grandiose content in users vulnerable to psychosis, but it is unclear whether these interactions can result in de novo psychosis in the absence of pre-existing vulnerability.

https://www.theguardian.com/technology/2026/mar/14/ai-chatbots-psychosis

5 Comments

  1. New study raises concerns about AI chatbots fueling delusional thinking

    A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people.

    A summary of the existing evidence on artificial intelligence-induced psychosis was published last week in The Lancet Psychiatry, highlighting how chatbots can encourage delusional thinking – though possibly only in people who are already vulnerable to psychotic symptoms. The authors advocate for clinical testing of AI chatbots in conjunction with trained mental health professionals.

    For his paper, Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analyzed 20 media reports of so-called “AI psychosis” and described current theories as to how chatbots might induce or exacerbate delusions.

    “Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability,” he wrote.

    For those interested, here’s the link to the peer-reviewed journal article:

    https://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(25)00396-7/abstract

  2. Having seen, while using these tools myself, the kinds of things that could trigger this in people, I’m inclined to think someone needs to be predisposed for this sort of thing to have an effect.

  3. Aren’t AI chatbots the ultimate yes-person? I doubt one will ever tell you what you need to hear rather than what it thinks you want to hear.