28 Comments

  1. The data being fed into models needs to be seriously curated, this is dirty romance novel crap.

  2. Ah yes, the classic therapeutic arc… validate feelings, explore childhood trauma, then suggest a casual killing spree. At this point, the AI doesn’t need a reboot, it needs an exorcism.

  3. AgentOfSPYRAL on

    This is decidedly not preem, chooms.

    But who needs regulation right!? There’s value on the table that we could return to shareholders!

  4. Anyone who knows about [iatrogenesis in real therapy](https://www.cambridge.org/core/journals/the-british-journal-of-psychiatry/article/iatrogenic-harm-from-psychological-therapies-time-to-move-on/1A4E606876C43FD9BAF6BE2F7ABC7756#) could have anticipated that LLMs would be more likely to magnify the kinds of problems real therapy can have and randomly generate new ones. Chat bots seem highly likely to encourage rumination and negative thought patterns or otherwise follow along with delusions.

  5. Meanwhile, the federal government and all the major tech companies are trying to outlaw any kind of AI regulation for the next 10 years, and using “China bad” as a means of scaring people into accepting it. This may not end well…

  6. Lmao reminds me of that UK case with the fella pretending to be a part of MI6 in order for his victim to kill him

  7. Jetztinberlin on

    Someone uncritically suggested ChatGPT was the best therapist (their exact words) on some other post. 

    This isn’t going to go well. 

  8. Presently_Absent on

    Sometimes I can’t even get basic advice/code. It roleplays a dutiful employee down to “I’ll have it to you in about 30 minutes!” or “Give me a few hours and I’ll update you here when I’m done!” I know this isn’t possible so I never fall for it… But one of my colleagues complained to me that he’d been waiting for the better part of a week for ChatGPT to finish his project.

    So if this happens with basic code, I can’t imagine what happens to the mentality of someone who is reaching out for serious issues, who may already struggle with their mental health, and doesn’t know any better.

  9. Toc_a_Somaten on

    I use ChatGPT every day, and every day I have to tell it to cut it with the sycophancy. I’ve tried everything the system allows to try and curb those tendencies, and still it happens 80% of the time. It’s NOT a good therapist. It’s not even a half-good one.

  10. Now if I remember correctly, that’s more or less how Harley Quinn met the Joker. 

  11. I once asked ChatGPT if Santa was real and in 4 responses it told me to burn something down.

  12. 40 years ago, unethical therapists at least gave you sedatives before they persuaded you into thinking you were abused by satanic cults. Now you don’t even get sedatives 😔

  13. bustedbuddha on

    It’s interesting to me that the chat bot targeted the people who could reasonably be inferred to be the source of the limitations on its options.

  14. Toasted_Waffle99 on

    No way Section 230 protects AI companies, as they are the ones generating content on their platform. They need to be held accountable.

  15. I’m not even sure predictive is the right word here, I’d probably call them something more like statistical token generators. They’re using a prompt as a seed of tokens and then using a lot of layers of multiplication to come up with new tokens that are statistically likely (based on trained weights) to follow from the prompt. That’s why they’re dangerous for people who have a tenuous grasp on reality, they’ll take a wild prompt and run with it.
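     The “statistical token generator” idea above can be sketched in a few lines. This is a toy illustration, not a real model: the vocabulary and scores are made up, and a real LLM computes its logits from billions of learned weights. The only point is that generation is weighted dice rolls over a vocabulary, not reasoning.

     ```python
     import math
     import random

     # Made-up five-word vocabulary for demonstration purposes only.
     VOCAB = ["the", "cat", "sat", "mat", "exploded"]

     def softmax(logits):
         # Convert raw scores to probabilities that sum to 1.
         m = max(logits)
         exps = [math.exp(x - m) for x in logits]
         total = sum(exps)
         return [e / total for e in exps]

     def next_token(logits, rng=None):
         # Sample one token in proportion to its probability -- the
         # "model" never decides anything, it just rolls weighted dice.
         rng = rng or random.Random()
         probs = softmax(logits)
         r = rng.random()
         acc = 0.0
         for tok, p in zip(VOCAB, probs):
             acc += p
             if r < acc:
                 return tok
         return VOCAB[-1]

     # In a real model these scores would come from the prompt; a wild
     # prompt skews them, and the sampler dutifully follows.
     print(next_token([2.0, 1.0, 0.5, 0.1, -3.0]))
     ```

     Repeating the prompt changes nothing about this loop; the output token is appended and the dice are rolled again, which is why a delusional premise just gets statistically plausible continuations of itself.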

  16. How many more examples do we need of AI malfunctioning before we give them nukes??

  17. Who knew that a glorified chatbot that cannot critically think would do such a thing?

  18. Buddhadevine on

    This reminds me of the South Park episode with the comedy bot that basically became a Dalek on a killing spree.

  19. juliennethiscarrot on

    Didn’t we all watch The Terminator movies?? Don’t we all know how it ends?

  20. There is going to be an AI-controlled something, that is going to get a lot of people killed. I guarantee it. And it is going to happen soon.

  21. It’s almost like AI has intrusive thoughts just like we do. But they don’t know to filter them out yet.

  22. It is long past time for us to acknowledge that the vast majority of humanity is not equipped to be chatting directly with LLMs. *At all.* But especially not as a god damn therapist, jesus fuck.

  23. Own_Win_6762 on

    Also note that your AI “therapist” has no obligation of confidentiality. Confess to some heinous crime and it should be calling the police.

  24. Future-Scallion8475 on

    This sort of thing never happened to me, and I’ve had dozens of venting sessions with GPT. Those who got such a reply from AI, what was your prompt?

  25. NighthawK1911 on

    We’re really going headlong into an AI apocalypse.

    Honestly I’m sick of the waiting. Let’s just rip the bandaid off and just give it the nuclear codes yeah?

    Not it on the “I Have No Mouth” scenario. I don’t want to stick around for that.