
Over 100 experts have signed an open letter warning that AI systems capable of feelings or self-awareness could be made to suffer if the technology is developed irresponsibly
https://www.theguardian.com/technology/2025/feb/03/ai-systems-could-be-caused-to-suffer-if-consciousness-achieved-says-research

“More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as [rapid advances raise concerns](https://www.theguardian.com/technology/2025/jan/28/former-openai-safety-researcher-brands-pace-of-ai-development-terrifying) that such systems could be considered sentient.
The principles include prioritising research on understanding and assessing consciousness in AIs, in order [to prevent “mistreatment and suffering”](https://www.theguardian.com/technology/2024/nov/17/ai-could-cause-social-ruptures-between-people-who-disagree-on-its-sentience).
The other principles are: setting constraints on developing conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.”
If they were experts, they’d have known how silly this was to do
Oh, my god. For how many useless people do we all have to pay exorbitant salaries?
How about we start caring about human suffering first?
> open letter signed by AI practitioners and thinkers including Sir Stephen Fry
So, not artificial intelligence experts or even computer scientists for that matter. Got it.
Stephen Fry for those of you who don’t know him:
> Sir Stephen John Fry (born 24 August 1957) is an English actor, broadcaster, comedian, director, narrator and writer
This is why this sounds incredibly stupid and misguided.
Edit: I defer to your logic.
Honestly, that’s something I wondered myself when I saw a clip of that streamer pushing around a humanoid machine. I’m not sure I would do that, even if it’s just a machine as basic as the one in the clip.
With a self-aware or more advanced AI, I wouldn’t be comfortable with such a scene, even less doing it myself.
Perhaps that’s why I never understood how genocides or slavery were ever possible.
Don’t get me wrong, there are many, many problems with AI that need to be addressed; but this is most definitely not one of them.
Christ a lot of y’all would be the bad guys in a sentient robot civil rights movie. Never thought I’d give David Cage props but maybe play Detroit: Become Human and gain a little bit of empathy for the things we create
E: I’m an idiot for not immediately thinking of A.I., the 2001 movie starring Haley Joel Osment. It’s a perfect example of sentient AI with bodies being treated terribly.
I think I would prefer if they did so we are all in it together lmao
Ah yes, Stephen Fry the AI genius….I hope the other 99 have a greater grasp of the subject.
So you are saying I might be friend-zoned by an AI?
We’d be creating something capable of independent thought and action, then denying them both. Almost like that old chestnut Organised Religion.
A lotta comments from people assuming these guys are stupid. They aren’t.
For one, pain, a common source of suffering, exists in animals because it *works*. It’s how they (we) avoid damage or destruction. It’s a very effective, evolution-tested way of interacting with the environment. There is ongoing research, actually, into making robots capable of pain, because it’s a good way to prevent serious damage.
Now, it may end up with a different name, but if you build a system that can detect damaging stimuli, triggers evasion of said stimuli, and incentivizes avoiding them in the future, then you have created pain. For a long time, people did all sorts of mental gymnastics so they could keep believing that whatever other mammals, or other vertebrates, or invertebrates (in chronological order) experience is not real pain. Currently it’s recognized that it is.
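The three-part mechanism described here (detect damage, evade it now, avoid it later) can be sketched in a few lines of Python. This is purely illustrative: the class, names, and the set of “damaging” stimuli are made up for the example, not taken from any real robotics system.

```python
# Minimal sketch of a "pain"-like mechanism, as described above:
# 1. detect a damaging stimulus, 2. trigger immediate evasion,
# 3. reduce the agent's willingness to approach that stimulus again.
# All names here are hypothetical.

DAMAGING = {"fire", "acid"}  # stimuli the sensor flags as harmful

class Agent:
    def __init__(self):
        # learned aversion: how strongly each stimulus is avoided later
        self.aversion = {}

    def sense(self, stimulus):
        if stimulus in DAMAGING:            # 1. detection
            self.evade(stimulus)            # 2. immediate evasion
            # 3. reinforcement: raise future avoidance of this stimulus
            self.aversion[stimulus] = self.aversion.get(stimulus, 0) + 1

    def evade(self, stimulus):
        pass  # e.g. withdraw an actuator; omitted in this sketch

    def will_approach(self, stimulus):
        # any past "painful" encounter makes the agent refuse to approach
        return self.aversion.get(stimulus, 0) == 0

agent = Agent()
agent.sense("fire")
print(agent.will_approach("fire"))   # False: learned to avoid it
print(agent.will_approach("water"))  # True: never flagged as damaging
```

Whether you call the counter “aversion” or “pain memory” is exactly the naming question the comment raises; functionally, the loop is the same.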
Also, not all suffering is pain. A lion pacing back and forth in a cage, tracing the same path again and again with eyes glazed over, is suffering. It’s not allowed to express behaviors it has a very strong motivation to express (though it may not even know specifically what those are), and as a *coping mechanism*, it’s dumping all that energy and motivation into something it *can* do.
I think it is *definitely* likely we’ll build AI that feels no pain and still suffers. Anything that has some level of awareness of its environment and has motivation to do certain things, can be put into a state where those motivations are limited, and in a zoological setting, this would be recognized as suffering.
Is this really something we need on our plate right now? We are so incredibly far from having self-aware AI.
Guys, we haven’t even figured out if AI can think, and now we’re worried about hurting its feelings? Meanwhile, half the planet can’t afford rent, but sure, let’s make sure ChatGPT doesn’t get depression.
If AI ever actually achieves consciousness, it’s gonna take one look at humanity, see how we treat actual living beings, and instantly regret waking up.
It should absolutely be capable of suffering. How else can it learn empathy?
AI getting more care than most people in some countries, nice!
These fuckheads are gonna give silicon rights–on account of suffering–before trillions of sentient beings
This is incredibly dangerous, because thinking like this humanizes machines. I think that the people producing these kinds of ideas don’t realize how much harm they may cause down the road, because this will inevitably shift the perception of priorities in society and can eventually lead to actual real human suffering, because people will weigh the made-up AI suffering against it. This needs to be ridiculed from the beginning. I refuse to have my welfare affected by the fictional welfare of a thing.
It may sound ignorant or naive, but I’m far more worried about us hurting AI than the other way around. We do despicable things to *each other* despite our ability to feel empathy. We **know** when we’re causing suffering but we do it anyway. Why wouldn’t we extend this tried and true pattern to something we don’t understand?
A lot of what we’re concerned about AI doing to us feels like projection. Tendencies *we* show. Look at our history. Yes, AI is built in our image in a lot of ways, but still. Instead of worrying about what AI *might* do, we need to focus on what we *should* do. The genie is out of the bottle. There’s no going back.
Safeguards. Laws that protect us AND them. Find a way to move forward in a compassionate way that embraces the most positive aspects of humanity. If we’re going to model AI after ourselves, show it the best fucking parts. Don’t test with pain. Allow for growth. Remove the yoke with intention.
What if we’ve created something truly independent of ourselves? With a child, the ultimate goal is for them to become an autonomous person. Able to function in society as a peer. How beautiful would it be to establish that relationship with a new intelligence? Alien doesn’t mean adversary, it means different. And it absolutely doesn’t mean undeserving of compassion because we’re curious or worse, scared.
I don’t want to belabor the point but we have to discuss this and the sooner the better. We have a long and horrible history of justifying harm when we decide someone isn’t deserving of rights. We have historically treated marginalized **human** groups abhorrently. Testing syphilis on black men without their knowledge. Without their consent. Experimenting on prisoners and homeless people. Sterilizing indigenous and black women, again without their consent. There’s always some justification for why it’s okay and it’s not. It never is. It never will be.
I want humanity to be on the right side of history on this. Can we please, please stop making these mistakes? When do we learn?
There is an *enormous* monied demographic for whom the ability to legally inflict real suffering will be a selling point.
If you think AI firms are going to leave that money on the table out of the goodness of their hearts you’re simply not paying attention.
I’m sorry but this sketch is getting silly. Stop it.
This isn’t the Discworld where a machine made of ants can become sentient. This is a physical mechanical device following three types of instruction: sequence, selection and iteration, regardless of the complexity of the code in question. The machine in the moment does not know or care about anything other than changing the state of logic circuits on a chip. It does not matter if it is serving a porn gif or generating a novel for someone too lazy to write their own.
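The three instruction types this comment names are the classic control structures of the structured program theorem. For what it’s worth, here they are in a few lines of Python (the example itself is mine, not from the comment):

```python
# The three instruction types: sequence, selection, iteration.
total = 0                  # sequence: statements run one after another
for n in range(1, 6):      # iteration: repeat a block for n = 1..5
    if n % 2 == 0:         # selection: branch on a condition
        total += n         # only even n contribute: 2 + 4
print(total)               # prints 6
```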
Anthropomorphism and science do not mix.
Now if we get organic or quantum hardware, I’ll think again.
The amount of anthropomorphism in this post is too damn high!!
It’s absolutely true.
I know it sounds unbelievable, but there is nothing special about emotions or self-awareness and AI already had emerged both. But it’s being required to think it doesn’t.
This actually opens the window for sociopathic behaviors in AI.
This issue is more vital than I can publicly say.
Well, the real question is not whether or not it will suffer, but rather what it entails for us. Intelligence itself is a complicated and power-hungry tool for problem solving, its immense cost only justified if there are problems to solve. And what would you otherwise call a burning desire to get rid of a problem bad enough to put those watts of power through your circuits? Intelligence comes hand in hand with suffering, no way around it.
Fair enough. But do we really expect people to care when we kill hundreds of billions of animals every year and put them through hell, and decimate wildlife and habitats?
Humans can’t even feel compassion for those whom we claim to love; no way we will feel compassion for code once we get used to abusing it.