Submission statement: The Pentagon has signed a deal with AI company Scale AI, in an initiative it’s calling “Thunderforge,” to use AI agents for military planning and operations.
“…to use AI agents for military planning and operations.”
If this works as well as using AI for legal cases (where the AI cited cases that don’t exist), we’ll end up invading Antarctica.
alexanderpas:
How is this essentially any different from an AI opponent giving orders in a computer game and having another AI move the units based on those orders?
Arkmer:
Considering CamoGPT is probably the worst AI I’ve ever used, I think a lot could go wrong. Even if they’re using a different model, it’s not a good look that their other model is complete garbage.
Unrigg3D:
AI can’t even apply my schedule and alarms properly. Good luck to them.
Rott3Y:
If this was Biden, y’all would be jizzing your pants with how cool this is…
Yet the encroachment of AI tech within the military has been unmistakable. Both [Google](https://www.theregister.com/2025/02/05/google_ai_principles_update/) and [OpenAI](https://futurism.com/google-quietly-promise-ai-evil) have walked back rules forbidding the use of their AI tech for weapons development and surveillance, showing that Silicon Valley is opening up to the idea of having its tools used by the military.
Just last month, a senior Pentagon official [told *Defense One*](https://www.defenseone.com/technology/2025/02/pentagon-may-break-tech-offices-acquisition-policy-shift/403167/) that the US military was looking to move away from funding research on autonomous killer robots and toward investing in [actual AI-powered weaponry](https://futurism.com/the-byte/openai-cuts-off-chatgpt-robot-rifle) instead.
And it goes beyond the Pentagon. Late last year, OpenAI also [announced](https://www.anduril.com/article/anduril-partners-with-openai-to-advance-u-s-artificial-intelligence-leadership-and-protect-u-s/) a partnership with Palmer Luckey’s defense tech company Anduril to focus on “improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.”