In Season 1 of Star Trek: The Original Series, the episode "A Taste of Armageddon" imagined a civilization that had been at war for 500 years — but fought entirely by computer simulation. When the algorithm registered casualties, citizens voluntarily reported to disintegration chambers to be executed. The war was clean, orderly, and endless — because it had been stripped of the horror that might otherwise force a peace.

This week Anthropic refused to let the Pentagon use Claude for autonomous weapons and mass surveillance. Trump responded by banning them from all federal contracts and threatening criminal consequences.

I couldn't stop thinking about that episode.

Full essay here.

https://substack.com/home/post/p-189571133

4 Comments

  1. Ok-Sundae-1191

    *The Anthropic/Pentagon standoff raises a question that will define the next decade: who controls the ethical boundaries of AI when it’s deployed by the most powerful military in human history?*

    *We’re at an inflection point. Private companies built these systems. Governments want to weaponize them. And the gap between “lawful use” and “autonomous lethal decisions without human oversight” is exactly where the future of warfare — and democracy — will be decided.*

    *The 1967 Star Trek episode at the center of this essay understood something we’re only now confronting: when you remove the horror from war, you remove the incentive to end it. Clean, algorithmic warfare is permanent warfare.*

    *What happens when every major AI company faces this same ultimatum? Will they hold the line like Anthropic, or will competitive pressure and government contracts erode every safeguard one negotiation at a time? And if AI ethics are ultimately negotiable, what does that mean for the humans on the receiving end of those decisions?*

  2. Speculative fiction doesn’t predict anything. It just extrapolates from the present.

  3. To be absolutely clear, Anthropic only objected to *domestic* automated surveillance. They were fine with global surveillance outside of ~300 million people; the other 7+ billion, they had no problem putting under automated mass surveillance. Don’t paint them as the ideal of an ethical company.