24 Comments

  1. Submission statement: forget whether AIs will ever kill humans against *everybody's* will. Should AIs actually be given license to kill?

    On the one hand, humans already kill each other in war. Using technology. So what’s the difference here?

    On the other hand: c’mon. We’re just *asking* for trouble. Don’t build the Torment Nexus, guys! Don’t. Do. It.

  2. It’s too late. It’s essentially an open secret at this point that drones are autonomously selecting and killing Russian targets in Ukraine, and in Israel it’s well known that there is an AI program that selects targets for IDF troops.

  3. Don’t worry. I’m sure the military and a huge sack full of cash will help some company decide.

  4. Seems like a weird thing for them to debate considering they don’t have the authority to kill people. Or did California pass a law I’m unfamiliar with?

  5. GoogleOfficial on

    It absolutely will happen, and you can argue that it must. In Ukraine, signal jammers prevent FPV drones from detonating on their targets. Fiber Optics have circumvented this somewhat, but it’s not a great solution. On-board AI targeting will be the solution.

    Plus, the downsides of AI targeting on the battlefield in Ukraine are non-existent. There are no civilians on the front lines. In my view, the real question is where and when AI targeting would be appropriate.

  6. Simple: every single time they start evaluating a kill, they have to analyze every single Silicon Valley CEO to decide if they should also kill that person based on the facts. Then let Silicon Valley tune its decision-making.

  7. legendarygael1 on

    Slippery slope with China in the picture. We’ll know where this will get us eventually anyways

  8. RockDoveEnthusiast on

    I remember reading that the only thing China, Russia, and the United States have agreed on in like the past 5 years is to NOT have restrictions on AI weapons… 🤦‍♂️

    We are the dumbest fuckin species.

  9. Let’s be honest: it’s going to happen if it hasn’t already, and I think it has. I’m pretty sure South Korea already has remote sentry guns at the border.

  10. Imagine wasting time debating it. It’s *probably* already happened* and it’s absolutely going to happen literally everywhere because of course it is. The only thing that limits how unpleasant weaponry gets is practicality.

    *There’s talk the Israelis used an autonomous weapon for an assassination in Iran. Nothing too fancy, but this stuff isn’t fancy.

  11. Why does Silicon Valley get to decide our future? I mean, aside from having most of the money and wanting all the money?

  12. If you have to debate it, you should be the first person they freaking test it on.

    It’s seriously a no brainer.

    If you can’t send an AI to jail for a crime, then they shouldn’t have the choice over life and death.

  13. It’s likely that it will happen anyway. It’s easier to blame collateral damage and civilian deaths on a “software error”. Not to mention it would make it harder to sue or hold people accountable if the company suddenly disappears.

  14. shadowsofthesun on

    AI is already being used for bombing campaigns in Gaza. A human mostly just rubber-stamps its decisions, spending on average 20 seconds per target to make sure they are male. This eliminates the “human bottleneck for both locating the new targets and decision-making to approve the targets.” “Additional automated systems, including one called ‘Where’s Daddy?’ also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.”
    [https://www.972mag.com/lavender-ai-israeli-army-gaza/](https://www.972mag.com/lavender-ai-israeli-army-gaza/)