>Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people.
>Today, their tools are not being used as weapons, but AI is giving the Department of Defense a “significant advantage” in identifying, tracking, and assessing threats, the Pentagon’s Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.
>“We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces,” said Plumb.
>The “kill chain” refers to the military’s process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb.
>The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta [walked back their usage policies](https://techcrunch.com/2024/01/12/openai-changes-policy-to-allow-military-applications/) in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don’t allow their AI to harm humans.
Stunning-Chipmunk243
I really don’t like the direction this is headed in. I just finished an article on how Russia and Ukraine are now using AI drones that can automatically identify and kamikaze-attack enemy equipment, infrastructure, and soldiers without being individually controlled. The article stated one person can operate 5 drones at a minimum… minimum, they said. Killing another human being is becoming much too easy and disconnected.
H3rbert_K0rnfeld
Terminator plot twist: SkyNet was never out of control. The culling was planned and executed according to plan. John Connor was a surprise to the plan.
Ronzok88
What if AI is already planning our downfall, and all the crises and vote results are already part of the plan?
;o half serious half crying
TomGNYC
Great! If it keeps accelerating at this rate, we could wipe all people off this planet in less than 50 years.
GrinningPariah
This kinda makes the “kill chain” sound scarier than it is.
The idea is just: a soldier sees an enemy tank. They need it gone. A few minutes later a jet drops a guided bomb on the tank. Success. But what was the chain of communication and decision-making that connected that soldier to that bomb? They’re in different branches of the military, so whose decision was it that the target was a priority? How high up the chain did the request have to go? How long did it take? How did the soldier pass the tank’s location to the jet?
That’s all the kill chain is: the chain of information linking detection of a target to a kill.
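The steps described in that comment can be sketched as a toy pipeline. This is purely illustrative: every function and field name here is hypothetical, and the stages loosely mirror the detect → prioritize → authorize → strike flow the commenter describes, not any real system.

```python
from dataclasses import dataclass

@dataclass
class TargetReport:
    """A spot report: what was seen and where."""
    target_type: str
    grid: str          # location reference
    priority: int = 0
    approved: bool = False

def find(target_type: str, grid: str) -> TargetReport:
    # Detection: a soldier (or sensor) reports the target.
    return TargetReport(target_type, grid)

def prioritize(report: TargetReport) -> TargetReport:
    # Decision: someone up the chain ranks the request.
    report.priority = 1 if report.target_type == "tank" else 2
    return report

def approve(report: TargetReport) -> TargetReport:
    # Authorization: the strike is cleared at the required level.
    report.approved = report.priority == 1
    return report

def engage(report: TargetReport) -> str:
    # Hand-off: the location is passed to the platform that acts on it.
    if not report.approved:
        return f"hold: {report.target_type} at {report.grid}"
    return f"strike: {report.target_type} at {report.grid}"

# The "chain" is just these steps composed in order.
result = engage(approve(prioritize(find("tank", "38S MB 1234 5678"))))
print(result)  # strike: tank at 38S MB 1234 5678
```

The point of the composition at the end is the commenter's: the kill chain is the information flowing through these hand-offs, and speeding up any one step shortens the whole chain.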
pagenrider
So, one must assume that the other world superpowers are not pondering the moral dilemma.