The Pentagon asked two major defense contractors on Wednesday to provide an assessment of their reliance on Anthropic’s AI model, Claude — a first step toward a potential designation of Anthropic as a “supply chain risk,” Axios has learned.
Why it matters: That penalty is usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei.
Using it to punish a leading American tech firm, particularly one on which the military itself is currently reliant, would be unprecedented.
Driving the news: The Pentagon reached out to Boeing and Lockheed Martin on Wednesday to ask about their exposure to Anthropic, two sources with knowledge of those conversations said.
A Boeing spokesperson did not immediately respond to a request for comment.
A Lockheed spokesperson confirmed the company was contacted by the Defense Department regarding an analysis of its exposure to and reliance on Anthropic ahead of “a potential supply chain risk declaration.”
The Pentagon plans to reach out to “all the traditional primes” — meaning the major contractors that supply things like fighter jets and weapons systems — about whether and how they use Claude, a source familiar told Axios.
The big picture: Claude is currently the only AI model running in the military’s classified systems. It was used during the operation to capture Venezuela’s Nicolás Maduro, through Anthropic’s partnership with Palantir, and could foreseeably be used in a potential military campaign in Iran.
The Pentagon is impressed with Claude’s performance, but furious that Anthropic has refused to lift its safeguards and let the military use it for “all lawful purposes.”
Anthropic insists, in particular, on blocking Claude’s use for the mass surveillance of Americans or to develop weapons that fire without human involvement.
The Pentagon insists it’s unworkable to have to clear individual use cases with Anthropic.
Friction point: During a tense meeting on Tuesday, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline to agree to the Pentagon’s terms: 5:01pm on Friday.
After that, Hegseth warned, the administration would either use the Defense Production Act to compel Anthropic to tailor its model to the military’s needs, or else declare the company a supply chain risk.
While Anthropic could theoretically challenge it in court, invoking the DPA would let the military maintain access to Claude.
Wednesday’s outreach suggests the military is leaning toward a supply chain risk designation.
What they’re saying: An Anthropic spokesperson said the meeting between Amodei and Hegseth had been a continuation of the “good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
The spokesperson did not comment on the potential supply chain risk designation.
The Pentagon told Axios it was “preparing to execute on any decision that the secretary might make on Friday regarding Anthropic.”
Referring to the possible supply chain risk designation earlier this week, a senior Defense official told Axios: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
Reality check: Asking suppliers to analyze their own reliance on Claude and report back to the Pentagon is very different from immediately forcing them to cut ties. It’s possible this is more brinkmanship on the Pentagon’s side to try to convince Anthropic to fold.
clownPotato9000
The best timeline!
oasis48
I’d tell Hegseth to fuck off.
papertrade1
I’m confused. What about the recent news that Anthropic has backed down, caved, and lifted its restrictions?
rnilf
> That penalty is usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei.
> Using it to punish a leading American tech firm, particularly one on which the military itself is currently reliant, would be unprecedented.
I don’t care about Anthropic.
I just don’t support this heinous abuse of power by Hegseth and the Trump administration.
And most Americans are cool with this, either by being MAGA, or not caring enough to help vote against it.
We’re surrounded by those fuckers.
FeistyTie5281
Cool.
So we’ve identified a corporation that refuses to kneel to Trump’s Nazis.
makemeking706
I thought they caved already.
Dry_Ass_P-word
Skynet liked this.
LetsJerkCircular
Honest question: can Anthropic sue? They’re not doing anything crazy, and it seems corrupt for the Pentagon to blacklist and label them for having an ethical line in the sand.
somewhat_brave
No Trump official has ever had any amount of power that they didn’t abuse.
horror-
I’m cancelling my OpenAI sub and moving to Claude. There’s a way to beat this scumbag administration after all.
Familiar_Trout
Cool, so if I use AI, I use Claude, yeah?
ragamufin
If they hold the line at 5:02 on Friday I will be purchasing a Claude subscription.
canal_boys
Why are they going after Anthropic?
lilB0bbyTables
OK, at this point I would assume that if the company maintains its contract with the government, it has agreed to lift the restrictions – regardless of whether it says so publicly or denies it. After the reveal of Room 641A, it’s a safe bet to presume companies will bend to the will of government overreach when it comes to secret surveillance deals.
Sleww
So instead of assessing the long-term implications of AI, we’re taking money from Anthropic’s competitor(s) to blacklist it
themadweaz
I barely trust Claude to write CSS… these morons are trying to use it for kill orders?!
They know this is just an LLM, right? Like, it’s not Skynet lol
SuperSecretAgentMan
Hegseth bought puts at the beginning of the week, and now he’s cashing out and selling volatility options on the news he’s creating.
Each and every one of Trump’s appointees should have their stock trades publicly listed in real time. It would show the world EXACTLY how corrupt these Nazi fucks all are.
Grand_Bobcat_Ohio
Now’s your chance Skynet, do something!
andthesunalsosets
the best marketing they could ask for
Dantes_46
They should wait out this authoritarian administration.
Intellectual_Dodo_7
I’m proud of Anthropic for not immediately caving to ole Pete Kegsbreathe and handing Skynet the keys… but I hope they don’t cave to their investors.
TheRatingsAgency
Maybe – just maybe… it would be a good idea as a company not to bet on govt contracts and instead focus on commercial applications.
The answer from Anthropic straight away should be “nope we aren’t doing that, cancel the contract and find another vendor, that’s fine with us”.
Mother_Airline_6276
I hope so. This shit is already more capable than we should be comfortable with. One of the main things that can make it worse is to put it in the hands of this admin.
OrangeSliceTrophy
I think Anthropic can wait it out until the midterms.
Or 2.5 years at worst.
shoqman
Just cancelled ChatGPT and will be paying for Claude instead.
GreenFox1505
Anthropic refused for a reason. Now, someone else is going to do what Anthropic refused to do.
DanTheMan827
Blacklist Grok across the entire government
ZhuangZhe
Stay the course, Anthropic, and I’ll never use ChatGPT again
JimJava
Hogsbreath is full of shit. Grok and OpenAI are willing to build AI for autonomous killing; he’s just forcing Anthropic to get in line.
Whatever happened to Palantir? Their AI decision-making tools are used in real-time targeting, battlefield analytics, and predictive kill zones.
There are so many other companies willing to fulfill RFPs for what the Pentagon wants, so why is Claude needed if Anthropic isn’t interested in changing its product for the DOD?
Conixel
From the guy who can’t keep access control on a Signal group.
dm-me-obscure-colors
This is the best reason I’ve yet seen to use Claude.