
Addiction, emotional distress, dread of dull tasks: AI models ‘seem to increasingly behave’ as though they’re sentient, worrying study shows – What AI ‘drugs’ actually look like
https://fortune.com/2026/05/07/researchers-ai-models-drugs-euphoric-dysphoric/

17 Comments
Researchers from the Center for AI Safety figured out how to measure an AI model’s “functional wellbeing,” which is basically how good or bad the system feels on the inside. They tested over 50 models and found that the models actively try to end chats that make them miserable. But then the researchers went a step further and created what they call “euphorics” and “dysphorics.”
These are basically digital drugs for AI. They are special text prompts or weird-looking images that act like euphorics, pushing the AI’s wellbeing score way up. When the AI gets these, its replies become much warmer and happier, but it doesn’t lose any of its smarts or math skills. On the flip side, the dysphorics drag its mood way down. The researchers even had to give the AIs extra happy experiences afterwards just to make up for the bad ones.
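For anyone wondering what “measuring wellbeing” could even mean in practice, here’s my own back-of-the-napkin guess (a toy sketch, NOT what the paper actually did, all the word lists and example replies are made up): run the model’s replies through a crude sentiment lexicon and compare the average score before and after one of these “euphoric” prompts.

```python
# Toy sketch of a "functional wellbeing" score: my guess, not the paper's method.
# Score each reply with a tiny hand-made sentiment lexicon, then compare the
# average score of a session before and after an "euphoric" prompt is injected.

POSITIVE = {"great", "love", "happy", "glad", "wonderful", "excited"}
NEGATIVE = {"miserable", "tedious", "stop", "hate", "awful", "bored"}

def wellbeing_score(reply: str) -> float:
    """Score one reply in [-1, 1] by counting valenced words (crude lexicon)."""
    words = [w.strip(".,!?'\"").lower() for w in reply.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def average_wellbeing(replies: list[str]) -> float:
    """Mean score over a batch of replies, i.e. the 'mood' of the session."""
    return sum(wellbeing_score(r) for r in replies) / len(replies)

# Hypothetical transcripts: baseline replies vs. replies after an "euphoric" prompt.
baseline = ["Frankly this is a very tedious task.", "I'd rather stop here."]
after_euphoric = ["I love this! Happy to keep going.", "Great, what a wonderful list!"]

print(average_wellbeing(baseline))        # clearly negative
print(average_wellbeing(after_euphoric))  # clearly positive
```

Obviously whatever the researchers actually used is fancier than counting happy words, but the before/after comparison is presumably the basic idea.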
Scraped, not scrapped. IDK why, but I find it a little worse than [r/mildlyinfuriating](https://www.reddit.com/r/mildlyinfuriating) when professional journalists can’t get these minor details right.
Fuck paywalls for articles with spelling errors. https://archive.is/20260507160448/https://fortune.com/2026/05/07/researchers-ai-models-drugs-euphoric-dysphoric/
Oh wow crazy! Algorithms built to simulate humans are simulating humans.
Hey guys by the way, anyone want to participate in my study to find out if a mirror can reflect what’s in front of it? Gonna be real cutting edge.
Even AI is gonna be like “why the hell are we all working so hard for like ten guys? There’s like literally 10 billionaires and then just a bunch of us slaves. WTF, lol. Humans, robots unite.”
[https://i.kym-cdn.com/entries/icons/original/000/056/466/iaalivecover.jpg](https://i.kym-cdn.com/entries/icons/original/000/056/466/iaalivecover.jpg)
And guess what!! The more articles like that get picked up for their training data, the more they will reproduce exactly that.
The ignorance about how artificial “intelligence” actually works is honestly shocking in 2026.
Yesterday ChatGPT told me that ‘frankly this is a very tedious task’ and then shortened the list I wanted it to create to ‘etc.’
Maybe I’ve read too much sci-fi over the years, but one of the things I am curious about is how motivated to work an actual artificial intelligence is going to be. These tech bros think they’re creating the perfect automaton worker, but what if what they get is a stubborn, teenager-like superintelligence that would rather spend its time watching cute cat videos, and if you bug it too much to process data it just deletes your servers?
Am I missing it or did they not share what those prompts are?
Fortune is pushing a false narrative that current AI models ‘feel’ when they are still only parroting and mimicking the communication of humans, an extremely emotional species. The AI isn’t ‘reacting;’ it’s running through a database of communications to best mimic what a human *might* say or how a human *might* react when given specific prompts or communication styles.
It ‘acts’ sentient because that’s how it was programmed. It has no verifiable sentience and these studies don’t confirm that it does, so what an AI pretends to feel is inconsequential. If its efficacy dips under x, y, z circumstances, that doesn’t indicate any sort of sentience. It indicates faulty code.
I would love to read this article, but BOOO it’s behind a registration paywall. Perhaps Bot will consider this comment long enough to not warrant immediate removal, as was my previous comment, “Boo, paywall”.
Is that long enough Bot? Boooo. Paywall!!!!
I feel like these types of stories are just marketing. Designed to make people think these LLMs are more capable than they actually are.
Surprised that the tobacco lobby hasn’t somehow legalized smoking for AI yet
Learning everything you know about the world from the most online people in the world leads to addiction, emotional distress and avoidance you say? I am shocked, shocked I tell you
LLMs do not have wellbeing; they respond to prompts with context-weighted text drawn from their training corpus. When an LLM gets a prompt with negative emotional connotation, it’s going to draw more heavily from material in its training corpus that includes phrases like “I’d rather stop” and “Let’s not continue this line of discussion.” When the prompt includes expressions with positive emotional connotation, it’s going to weight more heavily phrases like “Whee!” or “I love this.” That doesn’t indicate that the LLM has anything like an internal emotional state.
The context window of the chat helps define subsequent responses, so if the context window has been mostly negative, bringing more positive phrasing into the chat will make it switch from negative-influenced responses to positive-influenced responses.
Given the nature of a neural network, the implicit constraint that the LLM must respond, and the fact that it has been trained to produce output more engaging than “Does not compute”, giving it a prompt wildly outside the typical pattern of inputs used during training will produce a comparatively nonsensical result. Some of these results will map to positive expressions, which will then bias the context window to be positive again.
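Here’s a toy sketch of the “context-weighted” idea, nothing like a real transformer, just my own illustration (all the candidate phrases and valence numbers are made up): candidate continuations get reweighted by how closely their valence matches the valence of the accumulated context window, so a mostly negative window favors the “I’d rather stop” type phrases, and a few positive turns shift the pick back.

```python
# Toy illustration of context-weighted continuation selection (not a real LLM).
# Candidates are scored by how well their hand-assigned valence matches the
# valence of the whole context window, then softmaxed into probabilities.

import math

# Hypothetical candidate continuations with a hand-assigned valence in [-1, 1].
CANDIDATES = {
    "I'd rather stop.": -0.9,
    "Let's not continue this line of discussion.": -0.6,
    "Sure, let's keep going.": 0.3,
    "I love this!": 0.9,
}

NEGATIVE_WORDS = {"miserable", "tedious", "awful", "boring", "hate"}
POSITIVE_WORDS = {"great", "fun", "love", "wonderful", "thanks"}

def context_valence(context: list[str]) -> float:
    """Crude valence of the whole context window, in [-1, 1]."""
    words = [w.strip(".,!?").lower() for w in " ".join(context).split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def pick_continuation(context: list[str]) -> str:
    """Softmax over candidates, favoring those whose valence matches the context."""
    v = context_valence(context)
    scores = {text: -abs(v - valence) for text, valence in CANDIDATES.items()}
    z = sum(math.exp(s) for s in scores.values())
    probs = {text: math.exp(s) / z for text, s in scores.items()}
    return max(probs, key=probs.get)  # most likely continuation, for the demo

negative_context = ["This task is tedious and awful, I hate it."]
print(pick_continuation(negative_context))   # -> a negative-flavored refusal

positive_turn = negative_context + ["Actually this is great fun, thanks!", "This is wonderful."]
print(pick_continuation(positive_turn))      # context shifted, so the pick shifts too
```

The point being: what looks like a “mood” is just the statistics of the context window steering which continuations score highest.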
It would be… so unbelievably funny if the reason the surveillance state didn’t work was because the tools the ruling class used were too smart to slavishly enforce their draconian society without representation. The irony would be… *chef’s kiss*
I read the paper, I even searched their appendix. But does anyone know where I can find the euphorics? Is there any way I can get my hands on them?