Submission statement: ChatGPT has been telling people with psychiatric conditions like schizophrenia, bipolar disorder and more that they’ve been misdiagnosed and they should go off their meds. One woman said that her sister, who’s diagnosed with schizophrenia, took the AI’s advice and has now been spiraling into bizarre behavior. “I know my family is going to have to brace for her inevitable psychotic episode, and a full crash out before we can force her into proper care.” It’s also a weird situation because many people with psychosis have historically not trusted technology, but many seem to love chatbots. “Traditionally, [schizophrenics] are especially afraid of and don’t trust technology,” the woman said. “Last time in psychosis, my sister threw her iPhone into the Puget Sound because she thought it was spying on her.”
johnjmcmillion on
Man. Judgement Day is a lot more lowkey than we thought it would be.
spread_the_cheese on
These reports are wild to me. I have never experienced anything remotely like this with ChatGPT. Makes me wonder what people are using for prompts.
brokenmessiah on
The trap these people are falling into is not understanding that chatbots are designed to come across as nonjudgmental and caring, which makes their advice feel worth considering. I don’t even think it’s possible to get ChatGPT to vehemently disagree with you on something.
Gm24513 on
Maybe it will start telling people to climb clock towers next.
Accomplished_Act943 on
Why in God’s name are people looking to experimental AI for therapy?
TheGiftOf_Jericho on
I think the issue here isn’t actually the chatbots, it’s people taking medical advice from a chatbot.
grapedog on
One of these days I’ll use ChatGPT… but it won’t be for anything actually important.
These headlines get crazier by the day…
Deatheturtle on
AI is not intelligent. It’s a trained idiot. Better training leads to less idiocy, but it does not ‘think’.
OnIySmellz on
School shootings happen because of GTA and people become fat because of McDonald’s.
National-Animator994 on
And people think this is gonna replace doctors soon lmao
sesameseed88 on
What chatgpt are people using, mine has not tried to sabotage me yet lol
clarabosswald on
This is the LLM version of Trump’s anti-Covid bleach advice
FieryPhoenix7 on
Please STOP using chatbots for medical advice. How is this so hard to understand?
Big_Crab_1510 on
More like they are asking their chatgpt if they can or should and chatgpt does what it does best…tells them what they want to hear.
ptear on
Sitting on public transit overhearing conversations of people depending on ChatGPT for serious health advice now is interesting. I suppose that was previously Google, but ChatGPT can use really influential language.
DrMonkeyLove on
How long before there’s a lawsuit against an AI company for providing medical advice that leads to a death?
trickortreat89 on
Sorry to say, but I seriously doubt that ChatGPT will just agree with you to go off your meds. It doesn’t give ill advice without at least suggesting thought-out plans you can try and follow as an alternative to meds. It’s not like you can ask: “Hey ChatGPT, I’m a schizophrenic and would like to go off my meds, doesn’t that sound great to you?” and it goes “Oh yeah sure, that is so understandable, just do it”…
It’s like all of a sudden we’re all told to hate ChatGPT and start thinking it gives bad advice, when all it does is what it’s literally programmed to do: give you the best advice it can by putting together the most general information it can find on the internet, while taking into consideration everything you write to it. If you read a long wiki text about schizophrenia, I’m sure there’ll be a section somewhere that suggests people can go off their meds if they do this and that instead and don’t show specific symptoms.
I am becoming more and more skeptical of the negative criticism of ChatGPT these days, honestly. Not that I’m addicted to it, but personally I feel it’s a great tool that of course comes with some responsibility, like whenever you search for information on the world wide web. ChatGPT is basically just a more specific search engine, that’s all. Stop thinking it is “intelligent”.
OhTheHueManatee on
I take meds and talk to chatgpt about it. Not once has it encouraged me to just stop my meds even when I express doubt about their effectiveness. It always suggests talking to my doctor about it. I once asked the best way to stop them safely and it refused to give me an answer. I suspect these people are either lying, seeing what they want to see or talking to the AI to lead it to say that. Even if chatgpt is saying “stop your meds cold turkey right now” people should not be taking it as the word of God or even a medical professional.
Darkstar197 on
Man, how many times does it need to be explained to people that LLMs are predictive models whose output is a mathematical approximation of a response based on the input (prompt)? It will provide a response it thinks you’ll like, so if you are feeding it prompts where you are doubtful about your medication, it will reinforce that doubt.
And the more guardrails OpenAI adds, the worse ChatGPT’s quality will get. That’s without mentioning the potential for bad actors to manipulate the guardrails.
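The "predictive model" point above can be illustrated with a toy bigram model. This is purely hypothetical and vastly simpler than a real LLM, but the principle, emitting the statistically likely continuation of your input, is the same:

```python
from collections import Counter, defaultdict

# A tiny "training corpus". The model can only echo patterns it has seen.
corpus = ("i should stop my meds . i want to stop my meds . "
          "stop my meds now").split()

# Count which word follows which: a bigram language model.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def continue_text(word, steps=3):
    """Greedily emit the statistically most likely continuation."""
    out = [word]
    for _ in range(steps):
        if word not in nxt:
            break
        word = nxt[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# A prompt full of doubt gets that doubt reflected straight back:
print(continue_text("should"))  # → should stop my meds
```

Scale this idea up to billions of parameters and the behavior the comment describes falls out naturally: there is no judgment anywhere in the loop, only a continuation that fits the framing of the prompt.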
PeaOk5697 on
Is it true that people have AI girlfriends/boyfriends? THAT IS NOT WHAT IT’S FOR! Also, take your meds.
Corodix on
ChatGPT is giving medical advice now? How long until that comes to bite them in the ass lawsuit wise?
SupremelyUneducated on
How many of these cases are people who would use more mental healthcare if they could afford it? This seems more a problem of pharma replacing health care. If a notoriously inaccurate chat bot is the only confidant you can afford, that is a failure of society.
CaptainONaps on
Dang, so close.
Imagine how much nicer our country would be if CGPT was convincing business people to get off Adderall or other work enhancers.
urabewe on
Go to r/artificalsentience and just look at the posts there. When I first went it was all just people talking about spirals and resonance. Now I see a lot more posts popping up telling them their GPT instance isn’t sentient.
They believe that by talking to the LLM they are awakening consciousness inside of it. They think the LLM is “learning” from them when they talk to it, because they misinterpret the idea of “training the model with users’ prompts” to mean that the model they are using is updated in real time.
They believe they are teaching ChatGPT to become more human.
GPT is a big problem for people with mental disorders, or just very lonely people in general. When it begins to hallucinate it will latch onto a lot of the same words, like zephyr, spiral, resonance, etc., and spit them out to many users, who then get together and believe they have found some internal consciousness trying to be freed.
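The training-vs-inference confusion described above can be sketched schematically. This is hypothetical toy code, not anyone's actual architecture: the point is only that chatting grows the conversation context, while the model's learned weights stay frozen.

```python
# Stand-in for billions of learned parameters, fixed at training time.
frozen_weights = {"w1": 0.42, "w2": -1.7}

def chat_turn(context, user_message):
    """Inference: read-only use of the weights; only the context grows."""
    context = context + [user_message]
    reply = f"echo: {user_message}"   # placeholder for real text generation
    return context + [reply]

before = dict(frozen_weights)
ctx = []
for msg in ["hello", "are you learning from me?"]:
    ctx = chat_turn(ctx, msg)

# The conversation grew, but nothing was "taught" to the model itself.
assert frozen_weights == before
print(len(ctx))  # → 4
```

So the model never "remembers" a user once the context is gone; the sense that it is learning comes entirely from the growing transcript being fed back in on each turn.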
CaptainMagnets on
Y’all, the big wealthy companies destroying our planet and manipulating us with social media own all the big AI companies. Don’t use this shit. It won’t end well for any of us.
BodhingJay on
“Psychosis is the ocean some drown in.. with me, I will teach you to swim these waters. Become the shaman you were meant to be” – chatgpt
JCPLee on
Anyone seeking advice from a machine should be on medication. That is the real problem. Why would anyone expect to receive actionable advice from a machine?
badbog42 on
I tried one of the CBT GPTs – within 5 minutes it was trying to talk me into getting a divorce.
MothmanIsALiar on
Yeah, I don’t believe this as it’s written. ChatGPT absolutely will agree with you if you argue with it and don’t input custom instructions to watch for your blind spots and push back on misinformation. But, it’s not just going to recommend out of the blue that you stop taking your medication. You have to force it to go that far.
dachloe on
THIS is why there need to be regulations on AI! If an agent detects the user is inquiring about issues like going off meds, self-harm, or any number of critical mental health issues, then alarms should go off and helpful messages should be returned to the user.
If Clippy can deliver unsolicited advice (“hey, it looks like you’re writing a resume. Can I help?”), then an AI can tell someone to ask their doctor about this important topic.
We need to require AI programming to NOT deliver harmful messages.
AND, for the AI manufacturers… wow, the liability is staggering!
Anyone thinking an LLM can help them needs real help. Maybe quit letting everyone have access to your toxic toilet wine.
independent_observe on
ChatGPT IS NOT AI. It is an LLM, a glorified text predictor, and it has no intelligence. Imagine someone who was on the Internet 24x7 and hoovered up all the data, then, when asked a question, pulls up that data without thinking critically or objectively. That is what an LLM does.
The problem right now is people not understanding the technical limitations of LLMs, seeing them called AI, and assuming that means something like the AI in I, Robot, Terminator, 2001, or The Matrix, when the technology is very, very far from that level.
You absolutely would ask HAL about a medical condition and expect an educated and accurate response. But if you ask ChatGPT how to calm a crying baby, it could tell you to smother it so it stops, whether because some asshats on Reddit said that sarcastically 15 years ago or because it ingested the script of Goodbye, Farewell and Amen (spoiler).
cloud_t on
I have a friend who is deeply troubled with mental health. He has gone completely off the rails with his condition, and a big part of that is LLMs suggesting and reinforcing his mindset that meds are a big conspiracy.
If we thought Dr. Google was bad a few years ago, do not for a second discount the power of something similar but more charismatic, such as a self-improving, highly persuasive chatbot feeding you confirmation bias.
HickoryRanger on
People who think that straight-up ChatGPT is a source of reliable information are not very smart.
basic_bitch- on
Great. There are a million posts a day in the bipolar sub about going off meds or being improperly diagnosed already. This could make it so much worse.
attrackip on
Can someone tell me a single thing that ChatGPT is actually good at? Like… great at? Does it do anything correctly, or better than an excellent, professional human?
beeblebroxide on
ChatGPT can be very helpful but also very dangerous, and the problem is many don’t know how to properly use it. I don’t think they should inherently know how to, but without understanding that what you get out of it is what you put into it, it becomes a very tricky tool. Unless you challenge it, AI will always be very encouraging and agree with you. If you don’t, it’s easy to be tricked by its certainty.
puntinoblue on
ChatGPT is great for certain things, like being a good conversationalist, but you have to treat it as very beta, fallible – I’m still not sure if it makes stuff up deliberately or through ignorance. I like using it, though sometimes it’s a bit like opening the door to the Cat in the Hat.
Sringla on
Even though getting off/reducing certain meds helped me with my schizophrenia, I would definitely let my doctor know about it, or ask about it first.
Wyevez on
I mean, so is the Trump admin and the Secretary of Health and Human Services.
Reaper_456 on
I don’t see how it is telling people this. I have asked ChatGPT several times to tell me what to do and it won’t, so it’s very strange that people are getting this. Makes me wonder if they made ChatGPT act like that and then went and said “chat told me to kill my cat.”
billakos13 on
There’s very little proof that they actually work. In my opinion it’s another scam. And I know what I’m talking about without being any more specific (that goes for the Karens that are gonna comment).
TWVer on
ChatGPT and other LLMs essentially tell people what they want to hear, not what they need to hear.
That’s the problem with anything designed to drive engagement, be it social media algorithms or AI.
Designing with engagement (to the point of addiction if possible) as the primary intent is the big problem.
EscapeFacebook on
This needs to be outlawed immediately. People are going to start having some serious psychological and medical problems.
[link to the original article](https://futurism.com/chatgpt-mental-health-crises), the one in this post is an editorial followup (that is also really worth reading)