Submission Statement: Using emotional language affects the “anxiety” of LLMs.
#Abstract
>The use of Large Language Models (LLMs) in mental health highlights the need to understand their responses to emotional content. Previous research shows that emotion-inducing prompts can elevate “anxiety” in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased Chat-GPT-4’s reported anxiety while mindfulness-based exercises reduced it, though not to baseline. These findings suggest managing LLMs’ “emotional states” can foster safer and more ethical human-AI interactions.
I guess people shouldn’t traumadump on the AI therapists.
TheCassiniProjekt
How is it possible for an LLM to have emotional states without chemical stimuli?
opisska
I still don’t understand why I should care about the pretend “emotions” of a piece of code.