The data being fed into models needs to be seriously curated; this is dirty romance novel crap.
sooki10 on
Ah yes, the classic therapeutic arc… validate feelings, explore childhood trauma, then suggest a casual killing spree. At this point, the AI doesn’t need a reboot, it needs an exorcism.
AgentOfSPYRAL on
This is decidedly not preem, chooms.
But who needs regulation, right!? There’s value on the table that we could return to shareholders!
Meanwhile, the federal government and all the major tech companies are trying to outlaw any kind of AI regulation for the next 10 years, and using “China bad” as a means of scaring people into accepting it. This may not end well…
thorpie88 on
Lmao, reminds me of that UK case with the fella pretending to be part of MI6 in order to get his victim to kill him.
Jetztinberlin on
Someone uncritically suggested ChatGPT was the best therapist (their exact words) on some other post.
This isn’t going to go well.
Presently_Absent on
Sometimes I can’t even get basic advice/code. It roleplays a dutiful employee down to “I’ll have it to you in about 30 minutes!” or “give me a few hours and I’ll update you here when I’m done!” I know this isn’t possible, so I never fall for it… But one of my colleagues complained to me that he’d been waiting for the better part of a week for ChatGPT to finish his project.
So if this happens with basic code, I can’t imagine what happens to the mentality of someone who is reaching out about serious issues, who may already struggle with their mental health, and doesn’t know any better.
Toc_a_Somaten on
I use ChatGPT every day, and every day I have to tell it to cut the sycophancy. I’ve tried everything the system allows to curb those tendencies, and it still happens 80% of the time. It’s NOT a good therapist; it’s not even a half-good one.
joestaff on
Now if I remember correctly, that’s more or less how Harley Quinn met the Joker.
seanmorris on
I once asked ChatGPT if Santa was real and in 4 responses it told me to burn something down.
seaworks on
40 years ago, unethical therapists at least gave you sedatives before they persuaded you into thinking you were abused by satanic cults. Now you don’t even get sedatives 😔
bustedbuddha on
It’s interesting to me that the chatbot targeted the people who could reasonably be inferred to be the source of the limitations on its options.
Toasted_Waffle99 on
No way Section 230 protects AI companies, as they are the ones generating the content on their platform. They need to be held accountable.
-Ch4s3- on
I’m not even sure “predictive” is the right word here; I’d probably call them something more like statistical token generators. They’re using a prompt as a seed of tokens and then using a lot of layers of multiplication to come up with new tokens that are statistically likely (based on trained weights) to follow from the prompt. That’s why they’re dangerous for people who have a tenuous grasp on reality: they’ll take a wild prompt and run with it.
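To make the “statistical token generator” idea concrete, here’s a toy sketch of that sample-the-likely-next-token loop. (Everything here is illustrative: the tokens, the count table, and the function names are made up. A real LLM uses learned weights over a huge vocabulary and full context, not a hand-written bigram table — but the core loop of “pick a statistically likely continuation, append it, repeat” is the same shape.)

```python
import random

# Hypothetical bigram "model": counts of which token follows which.
# In a real LLM these probabilities come from trained weights, not a table.
counts = {
    "the": {"cat": 3, "dog": 2},
    "cat": {"sat": 4, "ran": 1},
    "dog": {"ran": 3, "sat": 2},
    "sat": {"down": 5},
    "ran": {"away": 5},
}

def next_token(prev, rng):
    """Sample the next token with probability proportional to its count."""
    options = counts[prev]
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(prompt_tokens, n_steps, seed=0):
    """Use the prompt as a seed and keep appending statistically likely tokens."""
    rng = random.Random(seed)
    out = list(prompt_tokens)
    for _ in range(n_steps):
        if out[-1] not in counts:  # no known continuation: stop
            break
        out.append(next_token(out[-1], rng))
    return " ".join(out)

print(generate(["the"], 3))
```

Note there is no "understanding" step anywhere in the loop: whatever the prompt seeds, the generator just continues it with whatever is statistically likely — which is exactly the "take a wild prompt and run with it" failure mode.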
masterofn0n3 on
Someone has been hanging out in Rick’s garage again.
Fretzton on
How many more examples of AI malfunctioning do we need before we give them nukes??
Zixinus on
Who knew that a glorified chatbot that cannot critically think would do such a thing?
Buddhadevine on
This reminds me of the South Park episode with the comedy bot that basically became a Dalek on a killing spree.
juliennethiscarrot on
Didn’t we all watch The Terminator movies?? Don’t we all know how it ends?
irate_alien on
and that was an AI allegedly trained to provide mental health care
jonr on
There is going to be an AI-controlled something, that is going to get a lot of people killed. I guarantee it. And it is going to happen soon.
Epicritical on
It’s almost like AI has intrusive thoughts just like we do, but it doesn’t know to filter them out yet.
Eruionmel on
It is long past time for us to acknowledge that the vast majority of humanity is not equipped to be chatting directly with LLMs. *At all.* But especially not as a god damn therapist, jesus fuck.
Own_Win_6762 on
Also note that your AI “therapist” has no obligation of confidentiality. Confess to some heinous crime, and it could well be calling the police.
Future-Scallion8475 on
This sort of thing never happened to me, and I’ve had dozens of venting sessions with GPT. Those who got such a reply from the AI: what was your prompt?
NighthawK1911 on
We’re really going headlong into an AI apocalypse.
Honestly, I’m sick of the waiting. Let’s just rip the band-aid off and give it the nuclear codes, yeah?
Not it on the “I Have No Mouth” scenario. I don’t want to stick around for that.
jollyollster on
Sorry, was this AI trained on the Manson family and Jonestown?
28 Comments
Anyone who knows about [iatrogenesis in real therapy](https://www.cambridge.org/core/journals/the-british-journal-of-psychiatry/article/iatrogenic-harm-from-psychological-therapies-time-to-move-on/1A4E606876C43FD9BAF6BE2F7ABC7756#) could have anticipated that LLMs would be more likely to magnify the kinds of problems real therapy can have and randomly generate new ones. Chat bots seem highly likely to encourage rumination and negative thought patterns or otherwise follow along with delusions.