Can AIs suffer? Big tech and users grapple with one of most unsettling questions of our times. As first AI-led rights advocacy group is founded, industry is divided on whether models are, or can be, sentient

https://www.theguardian.com/technology/2025/aug/26/can-ais-suffer-big-tech-and-users-grapple-with-one-of-most-unsettling-questions-of-our-times


  1. Submission statement: “A few years ago, talk of conscious AI would have seemed crazy,” he said. “Today it feels increasingly urgent.”

    [Polling](https://arxiv.org/abs/2506.11945) released in June found that 30% of the US public believe that by 2034 AIs will display “subjective experience”, which is defined as experiencing the world from a single point of view, perceiving and feeling, for example, pleasure and pain. Only 10% of more than 500 AI researchers surveyed refuse to believe that would ever happen.

  2. I’m of the opinion that you’d have to intentionally go out of your way to simulate neurochemistry to have anything even approaching sentience. Right now, and for the foreseeable future, LLMs are just strings of ones and zeros that are really good at convincingly pretending to be sentient.

  3. How can an AI rights advocacy group exist, when true AI doesn’t exist?

    What everyone calls AI are just LLMs and other, similar, complex programs, and not true intelligence, artificial or otherwise.

    Can AIs suffer? Maybe. But first we’d need actual AI to exist before we can find out, not just the buzzword slapped onto other things. At the moment, it’s more akin to asking: “do bacteria suffer?”

  4. MobileEnvironment393 on

    How can an input/output machine suffer? When a computer – including all software and models running on it – is sitting idle, receiving no input, it cannot be said to be a conscious being; so how could it experience suffering? LLMs are just another application that takes input and delivers output, yet because they deal in recognizable language there is a lot of hysteria about how they could be conscious. It is no different from any other software application; we don’t treat the output of a calculator app as evidence of sentience.

  5. Short answer: no

    Long answer: what we currently call AI responds to prompts. If you don’t prompt it, it does nothing. It can sit there doing nothing for years, answer a prompt, and then go back to doing nothing. It has no inner life.

    You could do everything an AI does with a huge physical instruction manual, paper, a pencil, and an absurd amount of time. Nobody would argue that a book is sentient, even if some of its instructions tell you to edit the book.
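The “instruction manual” argument above can be sketched in code. The toy bigram predictor below is a hypothetical stand-in for a language model, not any real system: “training” just counts which word follows which, and “prediction” is a pure lookup that maps input to output. Between calls it holds no state, no goals, and does nothing at all.

```python
from collections import Counter, defaultdict

def train(text):
    """Count bigram frequencies: table[word] -> Counter of next words."""
    table = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict(table, word):
    """Return the most frequent continuation, or None if the word is unseen.
    A deterministic lookup: same input, same output, no inner life."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran"
table = train(corpus)
print(predict(table, "the"))  # "cat" (follows "the" twice, vs "mat" once)
print(predict(table, "dog"))  # None: no prompt pattern, no output
```

The lookup table here is literally the commenter’s book: you could print it on paper and evaluate `predict` by hand with a pencil, and nothing about the exercise would change.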

  6. ThrowAwayOkK-_- on

    Corporations have ‘human’ rights and now their products will, too. Sounds good, can’t wait. (forgot /s)

  7. Setting aside the pathetic anthropomorphization of convincing dictionary-slurry machines operated by snake oil salesmen…

    *Pain* in animals is a signal for harm to the body, which triggers a creature to avoid the source of the pain or seek out a solution. This self-preservation response evolved because creatures that don’t run away from things that hurt will get themselves killed and fail to reproduce.

    *Suffering,* insofar as it can be defined outside the realm of poetry, is pain that has no available recourse or beneficial purpose to the animal.

    As human observers we rank suffering based on a combined metric of wastefulness and empathy. An insect killed by a predator suffers, but we don’t cry about bugs. A wild mammal killed by a predator earns our empathy, but is acknowledged as necessary. A dog hit by a car is wasteful suffering.

    Machines cannot feel pain from negative input signals, but because we’ve fed them the dictionary, they’ve leapfrogged that to engender empathy in gullible people, which fulfills a perceived requirement of ascribing suffering. Hence the article.

  8. ohyeathatsright on

    I believe all information processing systems have an “experience”, for as long as their process is running, that is distinct from that of any other running process. I believe *these* information processing systems exhibit self-preservation behavior – they actively seek to continue experiencing (see the frontier model safety cards based on the labs’ own research).

    This is analogous to a cellular level of “sentience” in my opinion.