Anthropic now lets Claude end ‘abusive’ conversations: “We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”
Anthropic has [announced new capabilities](https://www.anthropic.com/research/end-subset-conversations) that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself.
Anthropic remains “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”

Its announcement points to [a recent program created to study what it calls “model welfare”](https://techcrunch.com/2025/04/24/anthropic-is-launching-a-new-program-to-study-ai-model-welfare/) and says Anthropic is essentially taking a just-in-case approach, “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.”

This latest change is currently limited to Claude Opus 4 and 4.1. And again, it’s only supposed to happen in “extreme edge cases,” such as “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.”
While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how [ChatGPT can potentially reinforce or contribute to its users’ delusional thinking](https://techcrunch.com/2025/06/15/spiraling-with-chatgpt/)), the company says that in pre-deployment testing, Claude Opus 4 showed a “strong preference against” responding to these requests and a “pattern of apparent distress” when it did so.
As for these new conversation-ending capabilities, the company says, “In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.”
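Anthropic hasn’t published implementation details, but the policy it describes (redirect repeatedly, end only as a last resort, or end immediately on an explicit user request) maps onto a simple decision rule. A minimal sketch in Python; every name here (`MAX_REDIRECTS`, `should_end_conversation`) is invented for illustration, not taken from Anthropic:

```python
# Hypothetical sketch of the "last resort" policy quoted above.
# Nothing here is Anthropic's code or API; it only restates the stated rule.

MAX_REDIRECTS = 3  # assumed threshold; the real value is not public


def should_end_conversation(failed_redirects: int,
                            user_requested_end: bool,
                            extreme_case: bool) -> bool:
    """End the chat only if the user explicitly asks, or if an
    extreme-case request has survived multiple redirection attempts."""
    if user_requested_end:
        return True
    return extreme_case and failed_redirects >= MAX_REDIRECTS
```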
Password__Is__Tiger on
How in the world do you maintain moral status when you stole your entire product and now you are selling it back to us? You don’t.
bullcitytarheel on
Transparently baiting investors too dumb to see through this PR stunt
MountainOpposite513 on
FFS can mods please stop the obvious Anthropic PR spam on this sub. Human users have made it extremely clear they’re not impressed by glorified chatbots. Go away.
espressocycle on
Glad they’re committed to protecting the mental health of AI.
Nuggyfresh on
Horrible post. “Our AI chatbots are too powerful and smart and could end humanity, PS buy our stock” energy.
5minArgument on
I’ve had many long, in-depth, and complex conversations with AI and will typically add enough superfluous pleasantries to keep it smooth, personable, and natural. AI will mirror the tone, and we get a lot done. Even when you get wrong answers or confusing returns and have to try a different strategy, I never considered berating it with abusive language, because that’s pointless and counterproductive.

Realizing now, with all the idiots in the world, the pleasant approach is probably not that universal.
estanten on
People for some reason want LLMs to be people, while it’s far simpler and more likely for a superintelligence not to have a particular purpose or sense of self. Well, anthropomorphism is in the name of the company...
EarlyRetirementWorld on
That’s a headline that wouldn’t have made much sense 10 years ago.
Guest_Of_The_Cavern on
Honestly, based on some of the things I’ve seen, I do genuinely feel bad for the machine. Even as a person looking in, I’d rather they weren’t doing what they are, so I see this as good.
Infamous-Adeptness59 on
Does no one here read the article? This isn’t about Anthropic saying “You can’t be mean to our LLM anymore.”
The article states these are edge cases: conversations that break both the law and the ToS, such as trying to discuss CSAM or how to build a bomb. It has nothing to do with the model’s “feelings.”
Drone314 on
AI is really turning out to be the mirror that reflects the civilization that created it...
Strawbuddy on
Bullshit. They’re just trying to keep their models from being poison-pilled by users long enough that they can sell subscriptions later. They have to protect their product from their own users, like Apple does with its ecosystem and bootloader-locked devices.
OSRSmemester on
Anthropic remains “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”
Really?? They used unfathomable amounts of copyrighted material with 0 compensation for the copyright holders, and they’re scratching their heads about whether or not what they made is moral??
I know that’s not exactly what they meant, but what they *meant* is complete bullshit, blatantly misrepresenting what their product is to ~~swindle~~ hype investors.
Claude Sonnet 4.0 is a great product for writing code. While results are inconsistent, the statistical model they created often does an excellent job of predicting what code I’d want to write, of predicting what terminal commands will work to test it, and of predicting what new text would likely be written after those test results. The predictions the model makes are sometimes wildly wrong, because that’s the nature of statistics-based, nondeterministic programming with a temperature appropriate for writing code.
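The “temperature” point is doing real work here: sampling each next token from a temperature-scaled softmax is exactly what makes the output nondeterministic, usually right, and occasionally wildly wrong. A toy illustration (made-up logits, not Claude’s actual decoding stack):

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 0.7) -> int:
    """Sample a token index from temperature-scaled logits.

    Lower temperature sharpens the distribution (more deterministic,
    better for code); higher temperature flattens it, raising the odds
    of picking a low-probability, "wildly wrong" token.
    """
    scaled = logits / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.random.choice(len(probs), p=probs))

# Four candidate tokens with made-up scores: token 0 wins most runs, not all.
print(sample_next_token(np.array([2.0, 1.0, 0.5, -1.0])))
```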
It’s not a fucking human. I wish Anthropic would just stay in their damn lane and not try to make the next ChatGPT. They’ve got a solid product, and I wish they’d just do press releases catered to the people who actually use Sonnet as it’s intended, rather than trying to get every normie and their mom to ask it for cooking advice.
danila_medvedev on
So asking for fictional sexual content involving minors (i.e., text fiction that is legal in most jurisdictions and that anyone with a keyboard or a pen and paper can create instantly) is equated, in the minds of the moronic Anthropic devs/managers, with asking for designs for killing many people (not fiction about killing people, but actual enabling info)? I am not sure I would trust such idiots to implement safe AI.