Time Exclusive: Anthropic Drops Flagship Safety Pledge

https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/


  1. My thoughts on the conversation

    A: We are not going to allow the use of this tool for evil or to weaponize it

    G-men: If you don’t, we will use the two words you don’t want to hear

    A: What?

    G-men: National Security

    A: OK you win

    >Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.

  2. Our tech overlords are just soulless husks of people consumed by greed. None of these people apparently have any ethics or morals they won’t sell out at the drop of a hat for more money or power no matter what it means for everyone else.

  3. Shocking that their safety pledge held just as much weight as Google’s “don’t be evil” when they were truly tested.

  4. So funny after all these threads of people seeing anthropic “standing their ground” only for them to bend the knee instantly.

  5. STOP. CAPITULATING. I would boycott anthropic, but it’s not like I would ever use their shit products to begin with.

    Privacy rights, human rights, and AI safety need to be huge priorities for whoever is the nominee in 2028 if they want my vote.

  6. Dario is just as dependent as everyone else in Silicon Valley on the same big name VC money. Bezos, Page, Brin, lightspeed, etc.

    If he doesn’t show them how he plans to bring in more income than inference costs, whether by 100x-ing the cost of people’s subscriptions, doing evil for the government, or the classic move of bombarding users with ads, they will shut off the endless money faucet. They need to see the path to 1000% returns.

  7. Seems to be completely unrelated to their dispute with the Pentagon
     
    They’re just going to train AI and then decide if it’s safe to release rather than decide first if a model is safe to train

  8. “We are gonna do the right thing!”

    “Sir, the press is eating this up. We have the moral high ground.”

    “Turns out our competitors DGAF, and the fascists are mad at us. Guess it’s time to join FAFO”

  9. Honest_Chef323

    Please, everyone (who has a brain) knows there weren’t any ethics to begin with

    That’s just an illusion for the people, only to be dropped when it’s convenient

    If people thought there were any ethics to begin with, they need to snap back into reality

  10. >“We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.” – Anthropic chief science officer Jared Kaplan

    literal villain shit 

  11. For anyone surprised by this or unable to understand this decision, all you need to know is this single fact: There’s money to be made. 

  12. cosmiccerulean

    What is wrong with our society that only the rottenest of the rotten rise to the top and get to command the rest of humanity as they please?

  13. They already used it in a strike in Venezuela which killed 80+ people. This company and its AI are complicit in mass murder

  14. > We felt that it wouldn’t actually help anyone for us to stop training AI models

    aka we like money more

  15. Individual-Engine401

    Everyone thought surveillance of American citizens was getting out of hand before this. Better buckle up, because shit is about to get real, real fast. Our constitution is dead

  16. If Hegseth gets his way and they completely do away with safety protocols they should probably just rename the company Skynet.

  17. who_am_i_to_say_so

    Anthropic doesn’t need public approval or care about your opinion. Not anymore. The safety pledge was only for show this whole time.

  18. This was their entire shtick! I interviewed with them, and the safety aspect was almost like a cult. I cannot fathom how they dropped it.