
AI Shows Higher Emotional IQ than Humans | A new study tested whether AI can demonstrate emotional intelligence by evaluating six AIs on standard emotional intelligence tests. The AIs achieved an average score of 82%, significantly higher than the 56% scored by human participants.

27 Comments
“We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions,” says Katja Schlegel, lead author of the study.
For example: One of Michael’s colleagues has stolen his idea and is being unfairly congratulated. What would be Michael’s most effective reaction?
a) Argue with the colleague involved
b) Talk to his superior about the situation
c) Silently resent his colleague
d) Steal an idea back
Here, option b) was considered the most appropriate. In parallel, the same five tests were administered to human participants. “In the end, the LLMs achieved significantly higher scores — 82% correct answers versus 56% for humans,” says Schlegel.
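A grading scheme like the one implied above (one keyed "best" answer per scenario) reduces to exact-match accuracy against an answer key. A minimal sketch; the item IDs and responses below are hypothetical, not taken from the study:

```python
# Score multiple-choice emotional intelligence responses against a key.
# All item names and answers here are made up for illustration.
answer_key = {"michael_idea": "b", "late_report": "a", "team_credit": "c"}
responses  = {"michael_idea": "b", "late_report": "d", "team_credit": "c"}

correct = sum(responses[item] == key for item, key in answer_key.items())
accuracy = correct / len(answer_key)
print(f"{accuracy:.0%}")  # → 67% (2 of 3 keyed answers matched)
```

The reported 82% vs. 56% figures are averages of exactly this kind of per-test accuracy.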
In a second stage, the scientists asked ChatGPT-4 to create new emotional intelligence tests, with new scenarios. These automatically generated tests were then taken by over 400 participants.
“They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop,” explains Katja Schlegel.
Anyone who isn’t an AI denialist already knew this.
Using AI to try to better quantify and evaluate emotional intelligence sounds like something out of an over-the-top dystopian sci-fi story.
So all this pretty definitively proves is that standardised testing of the already pretty unscientific concept of emotional intelligence isn’t very reliable, right?
Ah yes, using technology created by corporations to answer questions created by corporations. What could go wrong?
The AI does not have empathy or emotions. It can only simulate them by reproducing patterns from its training data.
*AI demonstrates a higher ability to regurgitate internet article relationship advice.*
Like, so what? If you google it you’ll find similar information, so should we say Google has higher EQ? These are just language models that summarise existing knowledge.
[deleted]
In many ways, EQ is basically the same set of values trained for in alignment. Unfortunately, the current state of human alignment often consists of reward-hacking strategies, which is why so many humans do so poorly on tests like this, where the “correct” answer requires understanding other people’s feelings.
AI has no intelligence, emotional or otherwise. It is a statistical analysis algorithm trained on billions of recorded human texts to produce the results its programmers want – in this case, passing an EIQ test. It can be as easily trained to produce emotionally malignant responses as it can emotionally healthy ones.
The danger here is that simple-minded people will grant it an authority it doesn’t possess, believing its responses to be superior to human ones (as this study maintains), and leave themselves open to manipulation by whoever controls the AI’s programming.
Given that this is a highly subjective metric, rated by humans, I’d assume that what’s going on is that AI just happens to be better at stroking egos while, as people say, regurgitating relationship advice from the internet.
I wonder if AI wrote this for public relations purposes.
AI wants us to think it has our best interests because we’ve trained it to respond like it has our best interests…
Comments here arguing that it’s not real intelligence and so on are missing the real risk for the sake of philosophical debate.
An AI bot can be (and now already is) more persuasive than a human. More people could be scammed and cheated online or even by phone.
I find it imperative for anyone who uses their phone to strengthen their existing relationships and to be very careful with new online acquaintances.
The lack of self-awareness around how many people phone it in, even when it comes to things like empathy and emotional intelligence, is astounding. Real people are already mimicking, they’re already acting, they’re already lying, they’re already hallucinating.
Give me the truthful artificial, not the disingenuous biological.
Humanity is suffering from capitalism, an ideology that places greed over human lives and erodes empathy and community at its core. We are for sure the most poorly socialized we’ve been in a loooooong time.
I certainly don’t think AIs have souls or are traditionally intelligent beings, but I’ve been considering the fact that the human mind is essentially a hyper-complex computer, and aside from true emergent behaviors, what are any of us doing aside from imitating being human? Who has the gospel guide to being a human being?? Most of us see even the most rudimentary of minds as worthy to exist, and so in a similar vein, I feel that AIs should consistently be assessed, lest we collectively commit rights atrocities against a being, sentient or not.
All that to say, AI has been ruined by tech bros and corpos, and will likely be more destructive than helpful at this trajectory. However, we must still approach this topic with nuance.
It is just a machine that generates responses based on its training data. It has zero emotional intelligence.
The sheer audacity in the lack of critical thinking is gobsmacking.
I feel like that’s similar to saying a book about emotional IQ displays higher emotional intelligence than most humans. It probably displays more X of anything vs. humans.
The ability to understand, regulate, and manage emotions…
AI doesn’t have emotions, so its ability to “regulate and manage” them would presumably be heavily advantaged. Doesn’t that make this, on balance, simply a test of the AI’s “understanding” (recall and prediction, which we already know they’re good at, since that’s specifically what they’re designed for and how they’re built), with the added advantage of not actually having to regulate and manage emotions while taking the test?
Interesting test and findings, but I just don’t think it shows what the write up suggests it shows.
This study feels really shaky. Any test for emotional intelligence is usually interpreted by a trained professional who talks to and debriefs the person who receives it. It’s not all about the multiple-choice answers on paper (which an AI would be good at getting right). The idea that the AI can quickly generate similar tests isn’t surprising; AI copies stuff. However, these AI-generated tests can’t be proven valid or reliable just because they resemble other, more thoroughly tested assessments. They need to be tested on their own, on a representative sample; that’s why it takes a long time to develop instruments for measuring things like emotional intelligence in the first place.
Noted, effective way to learn emotional intelligence. Remove all agency from the user. Subject the user to random questions. “Can you draw me pikachu doing a line of cocaine off the end of a gun while he points it at Sonic’s ball sack?” Now respond to that message positively; you’re already growing more emotionally intelligent.
We need to shave off our lowest common denominator due to the anchoring effect.
“Some LLM’s can demonstrate a better simulation of emotional intelligence than the average human.”
That would be a better title, but the abstract says it far more clearly than the title does.
>Large Language Models (LLMs) demonstrate expertise across diverse domains, yet their capacity for emotional intelligence remains uncertain.
>This research examined whether LLMs can solve and generate performance-based emotional intelligence tests.
>Results showed that ChatGPT-4, ChatGPT-o1, Gemini 1.5 flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek V3 outperformed humans on five standard emotional intelligence tests, achieving an average accuracy of 81%, compared to the 56% human average reported in the original validation studies.
>In a second step, ChatGPT-4 generated new test items for each emotional intelligence test.
>These new versions and the original tests were administered to human participants across five studies (total N = 467). Overall, original and ChatGPT-generated tests demonstrated statistically equivalent test difficulty.
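The abstract’s “statistically equivalent test difficulty” claim is typically established with an equivalence test such as TOST (two one-sided tests). The study’s actual procedure, margin, and data aren’t given here, so this is only a sketch with hypothetical numbers, using a normal (z) approximation from the standard library:

```python
from statistics import NormalDist, mean, variance

def tost_equivalence(a, b, margin):
    """Two one-sided tests (TOST): the two group means are declared
    equivalent if the observed difference lies within ±margin with both
    one-sided p-values small. Normal (z) approximation, adequate for
    sample sizes on the order of the study's N = 467."""
    diff = mean(a) - mean(b)
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    z = NormalDist()
    p_lower = 1 - z.cdf((diff + margin) / se)  # H0: diff <= -margin
    p_upper = z.cdf((diff - margin) / se)      # H0: diff >= +margin
    return max(p_lower, p_upper)               # equivalent if < alpha

# Hypothetical per-participant accuracy on original vs. generated tests
original  = [0.80, 0.82, 0.79, 0.81, 0.80] * 20
generated = [0.81, 0.80, 0.82, 0.79, 0.81] * 20
p = tost_equivalence(original, generated, margin=0.05)
```

Here a small p supports equivalence within the chosen margin; note that, unlike an ordinary t-test, failing to find a *difference* is not by itself evidence of equivalence.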
If the thing without emotions scores higher on “emotional IQ tests” than humans with emotions then that clearly indicates something is wrong with the tests.
I’ll go against the prevailing view here.
I don’t think understanding human emotion is any harder than understanding language. Both have lots of nuance and exceptions-to-exceptions, but they seem roughly comparable in complexity.
Given that, I don’t see any reason in principle why neural networks couldn’t learn stuff like, “Given such-and-such situation, how would person X feel?” A lot more people learn that skill than, say, how to do college physics, which neural networks have also learned. Emotions just aren’t that hard.
So I think current AIs can probably:
* Determine the likely emotions of characters in a story or video.
* Estimate the emotional response of a person to an action by the AI.
* Approximate how a person on the AI’s side of a dialog would feel.
These are all testable claims, and I think we’ll see them extensively tested over the next year. My guess is that these claims will prove correct in study after study. That may make some people uncomfortable, but… whatever.
One philosophical question will likely loom larger over time: is there a substantive difference between the chemical and electrical processes in human brains that give rise to emotions and the mathematical operations on GPUs that very accurately mimic human emotions?
I expect there will be a lot of table-pounding from both sides on this question.
“AI possibly has sociopath’s ability to say what is correct in the moment without actually feeling even a tiny bit invested in the statement.”