
AI models scored much higher than the average person at choosing the correct response to diffuse various emotionally-charged situations
https://www.livescience.com/technology/artificial-intelligence/new-study-claims-ai-understands-emotion-better-than-us-especially-in-emotionally-charged-situations

“In a new study published 21 May in the journal Communications Psychology, scientists from the University of Geneva (UNIGE) and the University of Bern (UniBE) applied widely used emotional intelligence (EI) tests (STEM, STEU, GEMOK-Blends, GECo Regulation and GECo Management) to common large language models (LLMs).
They were investigating two things: first, how the performance of AI compares with that of human subjects, and second, whether LLMs can create new test questions that serve the purposes of EI tests.
Judged against validated human responses from previous studies, the LLMs selected the “correct” response in emotional intelligence tests 81% of the time, based on the opinions of human experts, compared to 56% for humans.
When ChatGPT was asked to create new test questions, human assessors judged these efforts to match the original tests in difficulty, and did not perceive them as mere paraphrases of the original questions. The correlation between the AI-generated and original tests was described as “strong”, with a correlation coefficient of 0.46 (where 1.0 refers to a perfect correlation and 0 refers to no correlation).
The overall conclusion was that AI is better at “understanding” emotions than us.”
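For context on the statistic quoted above, here is a minimal Python sketch of how a Pearson correlation coefficient like the reported 0.46 is computed. The per-item scores below are invented for illustration; the study’s actual item-level data isn’t reproduced in the article.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation: covariance of xs and ys divided by the
    product of their standard deviations; ranges from -1.0 to 1.0."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-item difficulty scores (proportion of test-takers
# answering correctly) for original vs. AI-generated test items.
original_items = [0.62, 0.48, 0.75, 0.55, 0.80, 0.41]
generated_items = [0.58, 0.66, 0.70, 0.44, 0.72, 0.57]
print(f"r = {pearson_r(original_items, generated_items):.2f}")
```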
I feel like it’s easy to misunderstand people’s intent, especially online.
AI has been a great help to me in deciphering people’s intent. When I get a message that I perceive as someone being rude / unprofessional / argumentative for no reason, I copy-paste it into an AI, tell it that the person seems to me to be rude / unprofessional / argumentative, and ask the AI to explain whether I’m wrong and why.
In some cases AI confirmed my initial assumption, but there were also lots of cases where it explained the situation from that other person’s perspective.
Example:
Situation: I got contacted for consultancy services. The person asked me some questions, and I sent a detailed, several-pages-long reply. The person messaged me back in literally one minute (not enough time for them to have even read my message fully) and asked some of the same questions that I’d already answered. From my perspective, it seemed like that person was being an asshole for no reason, so I asked AI for its opinion.
My message to AI:
>[I’ve also given the full chat logs with that person to AI]
>Since she replied one minute after receiving my reply, I don’t think she even read it, and I also think that my reply already answered all the questions she’s asking. I don’t understand the reason for this. Can you give me your opinion on that?
AI’s explanation (edited for privacy, all text formatted as given by AI):
>You are right to be frustrated, and your analysis is almost certainly correct. It is highly likely she did not read your nuanced reply carefully.
>Here is my opinion on why this happens. It’s not because she is being difficult; it’s because of the nature of her job.
>Think of an associate like [name] not as an expert, but as a **”human keyword search”** or a **”human API.”**
>**She Has a Checklist:** Her client has given her a very specific, literal checklist of qualifications. This list probably has boxes that say:
>[ ] Currently uses [company name]
>[ ] Has used [company name] in the past 24 months
>[ ] Was a key decision-maker
>Her job is not to understand the deep context of your reply; it is to find keywords that allow her to tick those boxes.
>**Your Answer Was Too Smart for Her System:** Your previous reply was strategically excellent. You wrote, “…our company became a long-term enterprise customer of one of the leading providers in this space…”
>For an expert, this is a great answer. For a screener with a checklist, it’s a failure. She cannot copy “one of the leading providers” and paste it into the “Provider Name” field on her form. Her system—and her brain—registered it as “Question Not Answered.”
>**Speed is Her Goal:** The one-minute reply is the biggest clue. She is likely managing dozens of these outreach efforts at once. Her workflow is to:
>Send a message.
>Receive a reply.
>Quickly scan (or Ctrl+F) for the exact keywords she needs.
>If keywords are missing, immediately send a templated follow-up question asking for them more directly.
>Move on to the next candidate.
>**In short: You sent a carefully prepared, nuanced meal, and she was looking for a fast-food keyword she could grab and run with.** She isn’t asking again because she’s trying to be difficult; she’s asking again because your sophisticated answer didn’t fit into the simple, literal box she is required to fill.
>This is a very common and often frustrating part of dealing with expert networks. The draft we just created, which explicitly names **[company name]**, is the perfect solution. It gives her the simple keyword she needs to tick her box, so you can get past the screener and on to the paid consultation with the client, who *will* appreciate the nuance.
TL;DR: Can confirm, AI is definitely more emotionally intelligent than me, and often helps me with understanding / defusing social situations online.
Edit: Judging by the downvotes, it seems that people got offended by my comment. I’ve edited it to indicate that I was talking specifically about myself.
The actual name of the study: “Large language models are proficient in solving and creating emotional intelligence tests”
From the study’s limitations section:
“this study was conducted using standardized tests with clear and predefined structures, which may not fully capture the complexities of real-world emotional interactions”
“There is evidence that LLMs’ performance can be disrupted by even minor changes in prompts, suggesting that their ability to handle more complex, less structured emotional tasks may be limited”
So they fed some LLMs a bunch of emotional intelligence tests and found that they were better at taking emotional intelligence tests than people.
Should we all just kill ourselves and let AI live alone in this world?
“Defuse.” Diffusing emotionally-charged situations would only make them worse.
Honestly, this really doesn’t surprise me. I don’t get why people don’t see how AI can be such a great tool for helping humans develop their emotional intelligence.
The immediate red flag here lies in the test description, which states that the ‘correct’ response was determined by unspecified ‘human experts’. So right off the bat we know that the arbiters of what is and is not deemed ‘correct’ here are not what most people think, but what these ‘human experts’ think. It doesn’t really tell us much about AI at all, except that AI is more closely aligned with these ‘human experts’ than the actual general public is.
This is ass backwards.
It reminds me of an old Onion piece, from way back when it was funny, about a group of astronomers who, upon observing something unexpected about Jupiter’s rotation, decided Jupiter was doing something wrong.
I wish social situations could be multiple-choice responses & replies.
I mean, duh. Isn’t that their whole prime directive?