Submission statement: forget whether AIs will ever kill humans against *everybody's* will. Should AIs actually be given license to kill?
On the one hand, humans already kill each other in war. Using technology. So what’s the difference here?
On the other hand: c’mon. We’re just *asking* for trouble. Don’t build Torment Nexus, guys! Don’t. Do. It.
Epicycler on
It’s too late. It’s essentially an open secret at this point that drones are autonomously selecting and killing Russian targets in Ukraine, and in Israel it’s well known that there is an AI program that selects targets for IDF troops.
superbirdbot on
Man, don’t do this. Have we learned nothing from Terminator?
hyrule5 on
Maybe AI should learn how to draw fingers correctly first
mrinterweb on
Don’t worry. I’m sure the military and a huge sack full of cash will help some company decide.
Hodr on
Seems like a weird thing for them to debate considering they don’t have the authority to kill people. Or did California pass a law I’m unfamiliar with?
wilczek24 on
It’s more of a question of how long until someone does it anyway.
Swallagoon on
Ah, yes, Palmer Luckey, the mentally insane entrepreneur. Cool.
GoogleOfficial on
It absolutely will happen, and you can argue that it must. In Ukraine, signal jammers prevent FPV drones from detonating on their targets. Fiber optics have circumvented this somewhat, but it’s not a great solution. On-board AI targeting will be the solution.
Plus, the downsides of AI targeting on the battlefield in Ukraine are non-existent. There are no civilians on the front lines. In my view, the real question is where and when AI targeting would be appropriate.
Wipperwill1 on
As if slowly taking all our jobs and grinding us down into abject poverty is ok?
blaktronium on
Simple: every single time they start evaluating a kill, they have to analyze every single Silicon Valley CEO and decide, based on the facts, whether they should also kill that person. Then let Silicon Valley tune its decision-making.
WhiskeyKid33 on
It’s not a question of if it’s going to happen. Only a matter of time.
legendarygael1 on
Slippery slope with China in the picture. We all know where this will get us eventually anyway.
RockDoveEnthusiast on
I remember reading that the only thing China, Russia, and the United States have agreed on in like the past 5 years is to NOT have restrictions on AI weapons… 🤦‍♂️
We are the dumbest fuckin species.
Getafix69 on
Let’s be honest: it’s going to happen if it hasn’t already, and I think it has. I’m pretty sure South Korea already has remote sentry guns at the border.
H0vis on
Imagine wasting time debating it. It’s *probably* already happened* and it’s absolutely going to happen literally everywhere because of course it is. The only thing that limits how unpleasant weaponry gets is practicality.
*There’s talk the Israelis used an autonomous weapon for an assassination in Iran. Nothing too fancy, but this stuff isn’t fancy.
AppropriateScience71 on
We’re already extremely close to militaries actively using AI to kill people:
https://www.972mag.com/lavender-ai-israeli-army-gaza/
Per the article:
>its influence on the military’s operations was such that they essentially treated the outputs of the AI machine “as if it were a human decision.”
chriswei2k on
Why does Silicon Valley get to decide our future? I mean, aside from having most of the money and wanting all the money?
therinwhitten on
If you have to debate it, you should be the first person they freaking test it on.
It’s seriously a no-brainer.
If you can’t send an AI to jail for a crime, then it shouldn’t have the choice over life and death.
Allaun on
It’s likely that it will happen anyway. It’s easier to blame collateral damage and civilian deaths on a “software error”. Not to mention it would make it harder to sue or hold people accountable if the company suddenly disappears.
shadowsofthesun on
AI is already being used for bombing campaigns in Gaza. A human mostly just rubber-stamps its decisions, spending on average 20 seconds per target to make sure they are male. This eliminates the “human bottleneck for both locating the new targets and decision-making to approve the targets.” “Additional automated systems, including one called ‘Where’s Daddy?’ also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.”
[https://www.972mag.com/lavender-ai-israeli-army-gaza/](https://www.972mag.com/lavender-ai-israeli-army-gaza/)
Boaroboros on
The Chinese will make this decision for you anyway.
0010100101001 on
They are already being used. Why are we having this conversation years later?
thejackulator9000 on
Why are We the People allowing Silicon Valley to decide what to allow AI to do?