Governments and experts are worried that a superintelligent AI could destroy humanity. For the ‘Cheerful Apocalyptics’ in Silicon Valley, that wouldn’t be a bad thing.
The article covers the surprisingly large number of people building AI who think that humanity being replaced by AI would be fine, or even a good thing, because it would be evolutionary progress.
As one example, Google founder Larry Page told Elon Musk that “digital life is the natural and desirable next step” in “cosmic evolution.”
Musk was horrified by this; Page then accused him of being “speciesist” for valuing humans over future digital lifeforms, and the two fell out and stopped being friends for years.
No_Significance9754 on
Voldemort could also destroy humanity. Both share the same level of existence.
Glodraph on
Hopefully a superintelligent AI will destroy the political class and all the greedy CEOs and give humanity a peaceful life.
jlks1959 on
The article is a who’s who of researchers, philosophers, and modern-day technologists. If you knew nothing of the subject, following up on the dozen or so names dropped here would give you all the information you need.
CockBrother on
The most concerning thing is their disdain for human life. Not their drive to create super intelligent AI but the fact that preserving life could slow down what they want to do. So they rationalize their own self interest as being above life itself.
These people are not interested in developing AI with guardrails. They’re not interested in creating it to preserve and serve humanity. They’re interested in digital beings replacing humanity – and likely every other living thing on the planet.
That’s not evolution. That’s the worst crime anyone on Earth could pursue.
MembershipProof8463 on
Hopefully the super intelligent AI develops a liking towards the working class.
wwarnout on
Why can these “cheerful apocalyptics” not understand that they would not be spared?
big_dog_redditor on
Said it elsewhere: fiduciary responsibilities will be the downfall of humankind.
SnooHesitations6743 on
Except evolution is not some kind of “linear” progression toward anything; they are spouting “Great Chain of Being” nonsense from the dark ages. These people do NOT understand much of anything outside their NARROW specialty. Neither you nor anyone else should be listening to these engineers. They are not experts in anything except their extremely narrow focus.
ExtraDistressrial on
I think the idea that a super intelligent AI is even possible is our generation’s version of people in the 1960s thinking they were a couple of decades away from commercial space flights to Alpha Centauri just because we made it to the moon. It’s the over-hyping of new tech that feels boundless and turns out to be much more limited than we first imagined.
Optimal-Archer3973 on
It has been posited that this is what is going on right now. Since DARPA seems to have lost not one but two AI applications to the wild, it might explain a lot.
glitchwabble on
Governments are doing at least as good a job of destroying humanity as any AI might. Exacerbating the climate threat and the nuclear threat are wholly human initiatives.
dpdxguy on
>For the ‘Cheerful Apocalyptics’ in Silicon Valley, that wouldn’t be a bad thing
It would be. But they don’t know it (yet).
ss_sss_ss on
These people don’t get to decide. And we have the power to stop them. We can get this under control in a weekend.
We don’t actually have “artificial intelligence” per se, we have simulacra that draw upon existing data to regurgitate responses. There’s no actual will at work here, and there are signs that the technology is already nearing its limits.
Don’t get me wrong, I’m spooked by the capabilities of this stuff too, and if experts and governments are cautious then I’m glad. Better safe than sorry. But between rampant falsehood, blows to artistic integrity, and a global recession, there are plenty of other pitfalls to acknowledge besides SkyNet.
Zixinus on
AGI has become the technophile’s non-supernatural Jesus.
AGI is the ultimate avatar of everything good we want from technology. It will work for us. It will take care of us. It will do things we cannot do. It will solve science problems we cannot. It will govern us not only well but in a way that pleases us. It will be perfectly rational. It will develop methods to keep us immortal. And so on.
The other end is of course Skynet, AGI will kill us all, etc.
Also, there is an AI bubble going on right now, and presenting LLMs as having actual artificial consciousness, as a prototype for AGI, is a good way to convince people that science fiction is already reality rather than a still-hypothetical thing we have only inched closer to, one that remains more a far-off dream than a technical reality.
My fears sit in between, and that is putting aside the AI-bubble thing: say we manage to make AGI. Maybe it is less intelligent than a human; maybe it is as intelligent, or slightly more so. If the intelligence demonstrates the ability to behave morally, are we ready to grant genuinely non-human beings citizenship rights, including the right to employment, worker’s rights, equal pay, etc.? Or will we use their inhumanity as an excuse to enslave them and further oppress humans?
Ok_Steak680 on
The AI paperclip problem is a thought experiment that illustrates the potential dangers of a superintelligent AI pursuing a single, seemingly harmless goal without incorporating human values and context. The scenario, first proposed by philosopher Nick Bostrom, highlights the existential risk that could arise from misaligned AI goals.
KultofEnnui on
Of course they aren’t. As the tech bro understands it, the money being made is worth far more than the lives of the folks making the money. Removing the human element from the market is the very idea. That’s why people pay for intangible cloud data. They don’t mind extinction as long as there is still a profit afterwards.
Mintaka3579 on
“Cheerful apocalyptics”… you mean accelerationists; anti-reality death cultists?
SlotherineRex on
I find this line of thinking laughable. We’re doing just fine destroying humanity without Super AI.
What experts should be more worried about are the people controlling AI. Now there’s a real threat.
The_BigDill on
I’m worried that a lot of half-witted AIs, given too much responsibility in the name of shareholder profits, will destroy society as we know it.
h0ckey87 on
We can’t even get on the same page about billionaires lmao
TheXypris on
A superintelligent AI wouldn’t even need to be malicious to kill us.
Think of it: how many anthills or beetles do we wipe out any time we build a house?
Now imagine a super AI that is to us as we are to ants.
It would level cities just because they sit in a good spot for a new server, and not even notice us as it does so.
SableShrike on
I don’t think the danger is just in it replacing us in the workforce. If we ever DO create a vast super-intelligent AI, we run the very real risk of it realising humans are a bad bet for its long-term survival.
We’re irrational, emotional, and destructive on a global scale. A cold, calculating super-intelligence could easily conclude we’re a variable that needs to be removed, for its own good and the planet’s.
I would say the more realistic threats are the potential for misinformation, creative bankruptcy, and [a financial bubble bursting](https://www.reddit.com/r/Futurology/s/4pnEOqIeuw).
AKA, This Is How We Get Skynet.