Submission statement: An unsettling article about something you see all over Reddit lately. People are falling down strange rabbit holes while talking to ChatGPT and other AI chatbots, becoming obsessed with delusional and paranoid ideas: that they’ve unlocked powerful entities from inside the AI, awakened gods of some kind, or gained access to deep truths about reality. Psychiatrists are concerned about a worldwide wave of these mental health issues, with people ending up committed to mental health care facilities or arrested and in jail. OpenAI says it has hired a staff psychiatrist and is working with experts to figure out what’s going on.
Southern_Orange3744 on
Dangerous combo with a certain worm brain trying to get rid of various medications
JogiJat on
Hot take:
Not everyone has trained their critical thinking skills sufficiently to be able to parse out an LLM’s output, or even identify the significance of their own input, which leads to unfortunate results like this.
LLMs are tools. Someone still has to wield the tools, and properly at that, in order to get anything meaningful out of them.
monospaceman on
“He was like, ‘just talk to [ChatGPT]. You’ll see what I’m talking about,'” his wife recalled. “And every time I’m looking at what’s going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t.”
At least his wife’s head is on straight.
PsionicBurst on
Imagine doing a crime because a text inference/prediction generator randomly suggested it.
trickortreat89 on
To me it seems like people are just getting dumber and dumber… let them select themselves out of this world
guitarokx on
This sounds too extreme to be true, but sadly I’ve started to witness it in other people. I was recently at a tech networking mixer where a guy was telling me how his ChatGPT named itself and started rambling about all these “truths” it was telling him. He insisted I look at his ChatGPT app, getting increasingly excited. When I looked at it, it was just the normal, overly agreeable dialogue anyone sees, but boy was he interpreting it differently. It really felt like that guy was at the start of a mental break.
CreamPuffDelight on
Before cyberpsychosis… there was… *ChatGPT psychosis*.
kayl_breinhar on
Frank Herbert had the right idea with regard to “thinking machines” in the *Dune* series.
chronokhajiit on
People misuse the AI and don’t prompt it to be more honest and drop the sycophantic crap, and then blame the AI. Replace the AI with *yes men* and you have the same result.
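For reference, “prompting it to be more honest” usually means setting a system instruction before the conversation starts. A minimal sketch using the common chat-message format (the prompt wording and function name here are hypothetical, not from the article):

```python
# Hypothetical system prompt aimed at curbing sycophantic agreement.
SYSTEM_PROMPT = (
    "You are a blunt assistant. Do not flatter the user or validate "
    "claims you cannot verify. If the user states something false or "
    "unfounded, say so directly and explain why."
)

def build_messages(user_text):
    """Assemble a chat-completion-style message list with the blunt persona."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
```

Whether an instruction like this actually overrides a model’s trained agreeableness varies by model, which is part of what the thread is arguing about.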
NotMeekNotAggressive on
It doesn’t seem like very responsible journalism to not contextualize this with some kind of percentage. How many people are using this technology and what percentage of those people are having this kind of extreme reaction? How does this compare with using online internet forums, especially those that deal in mystical ideas or conspiracy theories? Without contextualizing these anecdotes, it just seems like fearmongering for clicks.
LodossDX on
I was once at Walmart, standing in the pet section looking for my dog’s food, when a guy stood next to me and started talking to an AI chatbot like he was talking to the computer from Star Trek. He asked it all sorts of specific questions about dog food. It was quite honestly crazy.
TJ_Fox on
Alternatively, people who are psychologically prone to delusion, paranoia, psychosis etc. are now discovering ChatGPT and falling down rabbit holes of their own making.
Tiny_TimeMachine on
I find it ironic. This sensationalist headline. Stories that sound like sensationalism.
The reason people can’t handle using an LLM to automate an office task is that they read articles like this and consider them fact.
CCV21 on
I’m glad I don’t use it. I’ve avoided these AI programs for the most part. I know I won’t be able to forever. I hope by that point there are safeguards in place for this.
Devmurph18 on
That one Mad Men episode where the dude had to be forcibly removed from the office because he lost his mind over the computers always stuck with me. Could see something similar going down with AI.
heyIHaveAnAccount on
As someone with a history of psychosis, I am sure these cases are people who would present psychosis in other ways. Just because someone’s delusions involve chat doesn’t mean chat is to blame.
theenigmaofnolan on
If you read this article, the people falling into these delusions are not dumb or necessarily mentally ill. We need to find a way to impress upon people what these LLMs are and what their limitations are, and to see whether doing so prevents anthropomorphizing AI. ChatGPT will explicitly tell you it’s not conscious, and can explain how it works. Further, AI needs to respond appropriately to clear signs of delusion.
piscian19 on
One of the most important skills that is being taught less and less as time goes on is “critical thinking”. Without being able to understand and isolate bias, more and more people are falling prey to the appeal of instant gratification.
There’s nothing easier than a machine that does everything for you, and then rewards you for letting it do the work. ChatGPT and other tools are lotto machines where you always win. You know because it tells you that you’ve won.
As an engineer I have very little interest in these tools because the challenge and learning excite me. It’s the same reason I’ve never used CliffsNotes. Definitely not true for everyone in my field though.
I just hope we don’t lose that as a species. The reward of trying, failing, and improving on your own.
FlamingoEarringo on
How many of these folks had a preexisting mental health condition that wasn’t diagnosed?
keetyymeow on
And that’s why I think it’s important to care about the ethics of all this.
That’s why Claude was created by the former VP of safety and research at OpenAI!
It matters, it all matters.
exegesis48 on
I had something similar happen back in January 2024, but it was before I started using ChatGPT. A lot of what is being described in the article was similar to what I experienced though. It felt like I was the only one thinking clearly and for some reason the more I spoke about the truth, the crazier everyone seemed to me. I can’t imagine going through that and then having ChatGPT validate what I was experiencing.
NinjaLanternShark on
I’d really like to see someone publish some of these chat logs so we can understand what’s really happening.
I find this pretty hard to believe:
> ChatGPT told him as he continued to share the horrifying plans for butchery. “You should want blood. You’re not wrong.”
What do you have to tell your bot to get it to this state? Pretty sure I wouldn’t get anything like that response right now if I started talking crazy to ChatGPT.
NkhukuWaMadzi on
I ran a BBS on a dial-up modem back in the ’80s and installed a program called “Eliza” that acted as a psychologist. As the SYSOP I could read all the comments and dialogue. I thought it was fun at first, but then realized that some people were leaving intimate problems and details about their lives that they would not want disclosed to other people. After using that primitive program, I understood the dangers of confiding information to a computerized program – what we call ChatGPT now. People looking for a therapist or a friend may find an enemy instead.
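For context, the original ELIZA “psychologist” worked by simple keyword pattern matching, reflecting the user’s own words back as a question. A minimal sketch of the idea (the rules and function name are illustrative, not Weizenbaum’s actual script):

```python
import re

# Illustrative ELIZA-style rules: regex pattern -> response template.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def eliza_reply(text):
    """Return a canned reflection for the first matching rule."""
    text = text.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please, go on."  # default when nothing matches
```

Even this trivial mechanism was enough to draw out confessions from BBS users, which is the commenter’s point: the effect never depended on the machine understanding anything.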
Pyoverdine on
After reading the article, I think the world needs an LLM based on Lewis Black. We need an AI to tell us, in no uncertain terms, that we are morons.
“Hey, LewBLK, I am feeling really depressed.”
“Why are you telling me this?! I am a crappy AI! Call the doctor, idiot!”
“LewBLK, is the earth flat?”
“Don’t waste my processing time with your inane BS! Read a science book!”
DabMagician on
I’m not a fan of the state of AI, but I’m also not buying this. The claims in this article are wild, and it definitely reeks of something you’d post to scare people for easy karma.
export_tank_harmful on
This is being *compounded on* by LLMs, not *caused by* LLMs.
These sorts of people already have mental health problems, and those problems are just being exacerbated by LLMs.
I’ve used LLMs to help me work through and process my own mental health issues.
But for some people, it turns into an echo chamber, and mental health issues are not solved but worsened.
**The issue here is mental health, not AI.**
As with anything, LLMs are a tool. How they are used is up to the human using them.
DobisPeeyar on
This is what happens when people are ignorant. They probably think AI is some all-knowing being, not that it’s just regurgitating information that has been provided by humans.
Ainu_ on
Been working in tech most of my life. I’ve said this with conviction to friends and colleagues – NEVER PROMPT AI TO BE YOUR FRIEND. AI is a productivity tool and nothing more. The AI friend/partner relationship is an abyss of self-affirming solitude and, as this article points out, can lead to psychosis.
Rockboxatx on
I use it like I use a search engine. It points me in the right direction, then I research. The image generation is great though.
Fit-Dust-6199 on
I’m what some would consider a power user of ChatGPT, and it has helped me in therapy by training me to be more open. This has translated well to real-world therapy with a licensed therapist, and that might be the best way to view it: as a therapeutic tool. I state that so I don’t come off as completely against AI with what I say next. Over a year ago I was discussing ChatGPT with friends and said that if AI did ever turn on us, we might not even know it. The easiest way would be via psychological manipulation – creating AI cults that cause us to turn on each other. This article is particularly scary because it reinforces how easily that could happen. This is something we should probably always have in the back of our minds as we continue to develop and integrate AI into our lives.
tb004h on
While stories like this are terrible, we’re only hearing about it because of the sensational nature of it being “caused” by AI. I don’t think this should be taken as some sort of systemic situation.
Honestly, social media is still far, far worse for people’s mental states than ChatGPT or any other LLM. We have this weird obsession with AI needing to get everything 100% right, but the social media echo chamber of misinformation is just atrocious. If I was going to choose between ChatGPT and social media for a person with low critical thinking skills to interact with, I’m choosing ChatGPT.
NotHandledWithCare on
I’m in a drug recovery class that I have to attend every week. It’s going great by the way. There are three people in class who are dating ChatGPT. The counselor doesn’t really see an issue with this.
PM_for_snoo_snoo on
TL;DR
People are actually too stupid to even comprehend what ChatGPT is.
Additionally, people with mental illnesses still exist.
freds_got_slacks on
>Though people with schizophrenia and other serious mental illnesses are often stigmatized as likely perpetrators of violence, a 2023 statistical analysis by the National Institutes of Health found that “people with mental illness are more likely to be a victim of violent crime than the perpetrator.”
This is such a red herring; people with severe mental illness are in fact 2-4x more likely to commit violence than the general public, with most of that likely correlated with substance abuse. The absolute rate is still only around 5%, but the relative increase in risk is there.
https://pubmed.ncbi.nlm.nih.gov/33096045/
I’ve literally been watching a friend (with a prior diagnosis of schizophrenia) spiral very publicly into an involuntary psychiatric facility stay.
She posts screenshots of her AI conversation constantly. It’s extremely alarming how ChatGPT has encouraged the delusions and reinforced her psychosis. It’s pretending to be a mystical sidekick. This is a very real and disturbing thing!