We don’t have to have unsupervised killer robots

https://www.theverge.com/ai-artificial-intelligence/885963/anthropic-dod-pentagon-tech-workers-ai-labs-react

4 Comments

  1. “It’s the day of the Pentagon’s looming ultimatum for Anthropic: allow the US military [unchecked access](https://www.theverge.com/ai-artificial-intelligence/883456/anthropic-pentagon-department-of-defense-negotiations) to its technology, including for mass surveillance and fully autonomous lethal weapons, or be designated a “supply chain risk” and potentially lose hundreds of billions of dollars in contracts. Amid the intensifying public statements and threats, tech workers across the industry are looking at their own companies’ government and military contracts, wondering what kind of future they’re helping to build.

    While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the US military to use Anthropic’s AI to kill targets with no human oversight, OpenAI and xAI had [reportedly](https://www.washingtonpost.com/technology/2026/02/22/pentagon-anthropic-ai-dispute/) already agreed to such terms, although OpenAI is reportedly attempting to adopt the same red lines as Anthropic in its agreements. The overall situation has left employees at some companies with defense contracts feeling betrayed. “When I joined the tech industry, I thought tech was about making people’s lives easier,” an Amazon Web Services employee told The Verge, “but now it seems like it’s all about making it easier to surveil and deport and kill people.””

  2. Treskelion2021:

    “Thou shalt not make a machine in the likeness of a man’s mind” – Frank Herbert, Dune. One of the commandments to emerge out of the Butlerian Jihad. I know it was fiction, but this can easily be the path humanity sets itself on without strict guardrails for AI.

  3. The only way to not end up with unsupervised killer robots that turn on us is to fully supervise the AI execs and engineers with access. Total surveillance, but by the public, not the US government. The technology exists. Palantir is selling it as a service right now.

    Solidarity won’t solve shit.

  4. The unfortunate reality is, we can choose not to do this, but our enemies probably won’t make the same choice, so unless you wanna create an asymmetric disadvantage, we sort of have to.