From the article: Google has come a long way since its early days when “Don’t be evil” was its guiding principle. This departure has been duly noted before for various reasons. In its latest departure from its original ethos, the company has quietly removed a key passage from its AI principles that previously committed to avoiding the use of AI in potentially harmful applications, including weapons.
This change, first noticed by Bloomberg, marks a shift from the company’s earlier stance on responsible AI development.
The now-deleted section titled “AI applications we will not pursue” had explicitly stated that Google would refrain from developing technologies “that cause or are likely to cause overall harm,” with weapons being a specific example.
In response to inquiries about the change, Google pointed to a blog post published by James Manyika, a senior vice president at Google, and Demis Hassabis, who leads Google DeepMind.
The post said that democracies should lead AI development, guided by core values such as freedom, equality, and respect for human rights. It also called for collaboration among companies, governments, and organizations sharing these values to create AI that protects people, promotes global growth, and supports national security.
x0x-babe on
Just when you think it couldn’t get any worse… WHAT DA HELL
GISP on
Not really a surprise, they removed the "do no evil" part not too long ago after all.
horror- on
Honestly, this was inevitable. We live in a world where hobbyists can build autonomous auto-turrets in their living room and children fly remote fixed-wing aircraft for fun. DARPA ain't sitting this one out.
DataKnotsDesks on
Instead of “Don’t be evil”, perhaps Google ought to lead on this policy change. A pretty catchy strapline could be, “Evil is our business”.
Amaruk-Corvus on
>Google abandons ‘do no harm’ AI stance, opens door to military weapons | Shift in AI policy sparks concerns over potential military applications
Now I understand Nancy Pelosi's purchase of Google stock earlier this year. Effer knew something about this.
Konzeza on
Got to keep up with China. If we learned anything from history, it's that the moment you show weakness, someone will try to take over.
elfmere on
For the military today, for the corporate security defence force tomorrow.
CrypticNebular on
Google has long abandoned the “Don’t Be Evil” mantra. It’s always happy clappy marketing b/s with these organisations.
They’re all just like Mom’s Friendly Robot Company.
Diligent-Mongoose135 on
Brief Answers to the Big Questions by Stephen Hawking is a great book. In the first few chapters he talks about the future of humanity, because our bodies can't survive in space to travel the distances needed.
He describes the idea of biological hacking as a certain eventuality.
Edit: hit post too soon! Lol
Continued: all it takes is one scientist to inject themselves with their concoction and all the laws go right out the window.
Same thing is true here… China and Russia hate America. Should the US fight with one hand behind its back? Or should we get Xi and Putin's pinky promise that they are really good guys and trust they would never develop any combat-based AI? Lol, come on.
bamboob on
Next stop: "Google abandons 'no AI weapons use on American citizens'" stance