Scoop: Pentagon takes first step toward blacklisting Anthropic

https://www.axios.com/2026/02/25/anthropic-pentagon-blacklist-claude


  1. Brilliant_Version344 on

    The Pentagon asked two major defense contractors on Wednesday to provide an assessment of their reliance on Anthropic’s AI model, Claude — a first step toward a potential designation of Anthropic as a “supply chain risk,” Axios has learned.

    Why it matters: That penalty is usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei.

    Using it to punish a leading American tech firm, particularly one on which the military itself is currently reliant, would be unprecedented.
    Driving the news: The Pentagon reached out to Boeing and Lockheed Martin on Wednesday to ask about their exposure to Anthropic, two sources with knowledge of those conversations said.

    A Boeing spokesperson did not immediately respond to a request for comment.
    A Lockheed spokesperson confirmed the company was contacted by the Defense Department regarding an analysis of its exposure and reliance on Anthropic ahead of “a potential supply chain risk declaration.”

    The Pentagon plans to reach out to “all the traditional primes” — meaning the major contractors that supply things like fighter jets and weapons systems — about whether and how they use Claude, a source familiar told Axios.
    The big picture: Claude is currently the only AI model running in the military’s classified systems. It was used during the operation to capture Venezuela’s Nicolás Maduro, through Anthropic’s partnership with Palantir, and could foreseeably be used in a potential military campaign in Iran.

    The Pentagon is impressed with Claude’s performance, but furious that Anthropic has refused to lift its safeguards and let the military use it for “all lawful purposes.”

    Anthropic insists, in particular, on blocking Claude’s use for the mass surveillance of Americans or to develop weapons that fire without human involvement.

    The Pentagon insists it’s unworkable to have to clear individual use cases with Anthropic.
    Friction point: During a tense meeting on Tuesday, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline to agree to the Pentagon’s terms: 5:01pm on Friday.

    After that, Hegseth warned, the administration would either use the Defense Production Act to compel Anthropic to tailor its model to the military’s needs, or else declare the company a supply chain risk.

    While Anthropic could theoretically challenge it in court, invoking the DPA would let the military maintain access to Claude.
    Wednesday’s outreach suggests the military is leaning toward a supply chain risk designation.

    What they’re saying: An Anthropic spokesperson said the meeting between Amodei and Hegseth had been a continuation of the “good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”

    The spokesperson did not comment on the potential supply chain risk designation.

    The Pentagon told Axios it was “preparing to execute on any decision that the secretary might make on Friday regarding Anthropic.”
    Referring to the possible supply chain risk designation earlier this week, a senior Defense official told Axios: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
    Reality check: Asking suppliers to analyze their own reliance on Claude and report back to the Pentagon is a lot different than immediately forcing them to cut ties. It’s possible this is more brinksmanship on the Pentagon’s side to try to convince Anthropic to fold.

  2. I’m confused. What about the recent news that Anthropic has backed down and lifted its restrictions?

  3. > That penalty is usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei.

    > Using it to punish a leading American tech firm, particularly one on which the military itself is currently reliant, would be unprecedented.

    I don’t care about Anthropic.

    I just don’t support this heinous abuse of power by Hegseth and the Trump administration.

    And most Americans are cool with this, either by being MAGA, or not caring enough to help vote against it.

    We’re surrounded by those fuckers.

  4. LetsJerkCircular on

    Honest question: can Anthropic sue? They’re not doing anything crazy, and it seems corrupt for the Pentagon to blacklist them and label them for having an ethical line in the sand.

  5. I’m cancelling my OpenAI sub and moving to Claude. There’s a way to beat this scumbag administration after all.

  6. lilB0bbyTables on

    Ok at this point I would assume that if the company maintains its contract with the government, it has agreed to lift the restrictions – regardless of whether they say so publicly or deny it. After the reveal of Room 641A, it’s a safe bet that companies will bend to the will of government overreach when it comes to secret surveillance deals.

  7. So instead of assessing the long-term implications of AI, we’re taking money from Anthropic’s competitor(s) to blacklist it

  8. I barely trust Claude to write CSS… these morons are trying to use it for kill orders?!

    They know this is just an LLM, right? Like, it’s not Skynet lol

  9. SuperSecretAgentMan on

    Hegseth bought puts at the beginning of the week, and now he’s cashing out and selling volatility options on the news he’s creating.

    Each and every one of Trump’s appointees should have their stock trades publicly listed in real time. It would show the world EXACTLY how corrupt these Nazi fucks all are.

  10. Intellectual_Dodo_7 on

    I’m proud of Anthropic for not immediately caving to ole Pete Kegsbreathe and handing Skynet the keys… but I hope they don’t cave to their investors.

  11. TheRatingsAgency on

    Maybe – just maybe…..it would be a good idea as a company to not bet on govt contracts and instead focus on commercial applications.

    The answer from Anthropic straight away should be “nope we aren’t doing that, cancel the contract and find another vendor, that’s fine with us”.

  12. Mother_Airline_6276 on

    I hope so. This shit is already more capable than we should be comfortable with. One of the main things that can make it worse is to put it in the hands of this admin.

  13. Anthropic refused for a reason. Now, someone else is going to do what Anthropic refused to do. 

  14. Hogsbreath is full of shit. Grok and OpenAI are willing to build AI for autonomous killing; he’s just forcing Anthropic to get in line.

    Whatever happened to Palantir? Their AI decision-making tools are used in real-time targeting, battlefield analytics, and predictive kill zones.

    There are so many other companies willing to fulfill RFPs for what the Pentagon wants, so why is Claude needed if Anthropic isn’t interested in changing its product for the DoD?