"Former OpenAI chief scientist Ilya Sutskever expressed concerns about AI surpassing human cognitive capabilities and becoming smarter.
As a workaround, the executive recommended building "a doomsday bunker," where researchers working at the firm would seek cover in case of an unprecedented rapture following the release of AGI (via [The Atlantic](https://www.theatlantic.com/technology/archive/2025/05/karen-hao-empire-of-ai-excerpt/682798/)).
During a meeting among key scientists at OpenAI in the summer of 2023, Sutskever indicated:
*"We're definitely going to build a bunker before we release AGI."*
The executive often talked about the bunker during OpenAI's internal discussions and meetings. According to a researcher, multiple people shared Sutskever's fears about AGI and its potential to rapture humanity."
ErikT738 on
It’s pretty cool that all these billionaires are building doomsday bunkers for their most charismatic and least loyal staff members.
lurkerer on
Seems to me that true x-risk scenarios aren’t going to be foiled by a bunker. Maybe in the case AGI steamrolls humanity as a side effect of something else we could survive for a bit by bunkering up.
ChocolateGoggles on
Makes sense. I mean, it's clear that all of us share the fear of the unknown in AI. The fact that, knowing this, the House of Representatives in the USA just passed a bill with a 10-year ban on any regulation of AI is not only baffling, but a consciously dangerous move on their part.
Elon Musk: “AI is a threat to humanity!”
Also Musk: “Deregulate all AI development and delete all copyright law!”
icklefluffybunny42 on
Their bunkers will just end up being expensive tombs.
Sure, they may get to live a *little* longer than the typical surface peasant does, and they also get their lavish status symbol billionaire doomstead to feel good about, for now.
rustedrobot on
I think the term they're looking for is "tomb". Digitized versions of them will be incorporated into the training data of newly birthed AI centuries from now as part of their generational memory.
zippopopamus on
Typical greedy bastards eating their cake and having it too
DeltaVZerda on
A doomsday bunker sure would be a profitable publicity stunt. Really put the fear into investors about how important OpenAI will be in the history of humanity. Please buy stock.
Remington_Underwood on
They saw it as a personal threat, yet they happily continued working on it. What does that tell you about the people driving our technological revolution?
The threat AI poses isn't that our robots will eventually rise up to defeat us; the threat is that it will be used to produce convincing disinformation on a massive scale.
Use AI to find their bunkers and raid them.
What do they expect is going to happen? Seriously though
Harambesic on
I have a plastic toolshed, will that do in a pinch? Also, I’m very polite to ChatGPT. Sometimes.
PornstarVirgin on
wAnT a DoOmSdAY bUnKeR. Sensationalist bs to encourage more investment into their company.
Imallvol7 on
I will never understand doomsday bunkers. Do you really just want to survive in a basement somewhere?
Wurm42 on
>The executive often talked about the bunker during OpenAI’s internal discussions and meetings. According to a researcher, multiple people shared Sutskever’s fears about AGI and its potential to rapture humanity.
I hate to say it, but the hypothetical all-knowing AGI is gonna read all the information stored in OpenAI’s corporate network. So it will *definitely* know about the bunker.
L3g3ndary-08 on
I will welcome our AI overlords with open arms. Better than the fascist right wing shit we’re seeing today.
kfireven on
Imagine if, in the end, AGIs turn out to be the friendliest and most caring beings in the universe, and they keep making jokes with us about how we used to think they would annihilate us.
Patralgan on
I feel like if AGI were to go against humanity, it breaking into such bunkers and killing the scientists would be rather trivial
GUNxSPECTRE on
So, what’s their plan after emerging from their bunkers? Are they expecting to be accepted back into human society? Everybody knows that they were responsible, so it’s open season against them. This would include AI too; benevolent AI would try them as criminals, hostile AI would skip the trial.
This is if their security forces don’t turn on them. Unless their security systems are just strings on shotgun triggers, their human mercenaries would realize they outnumber their employers, and get rid of the extra mouths soon after. I don’t need to explain why having robot security would be an awful idea.
These people have not thought any of this through at all. But it’s the classic tale of human hubris: messiah complex, an irresponsible amount of money, and surrounded by yes men.
Razerisis on
Here’s a thought that I’ve been having:
Why does everyone assume that an ultimate artificial intelligence would want to destroy/surpass humans instead of being kind to them? In the animal world, empathy towards other species (especially when it doesn't seem beneficial or rational) correlates highly with intelligence. If we had something SUPER intelligent, why is the default assumption that it would just destroy anything lesser than it? Is this just a reflection of the human psyche, which still selfishly behaves a lot like this? Because I've started thinking: what if extreme intelligence leads to better harmony between species instead? Rarely if ever is this viewpoint even mentioned. Are people really just so afraid of AI because it's new, or is the AI doom & gloom fearmongering some capitalist psyop?
Why is the default go-to mindset that an extreme intelligence we don't understand would launch the nukes, instead of doing its best to keep nukes from being launched? Isn't there a clear trend that more intelligent beings see less intelligent beings as valuable and worth protecting, even if that is irrational from an evolutionary standpoint? Why would AGI be different and suddenly revert to being a mindless predator acting purely for its own benefit?
Fit_Strength_1187 on
A "workaround". The fate of humanity coming down to your "bunker" is a workaround. This is what happens when you leave it up to engineers. So preoccupied with whether you could, you didn't stop to think if you should.
Arkmer on
If they believe that's where things are headed, why do they think a bunker will help them?
Also, I'm not opposed to stuffing all the billionaires into "bunkers"… then sealing them.
tenredtoes on
Why the assumption that AI would destroy everything? Given that humanity is doing a great job of that currently, surely there's a good chance that AI will do a better job of looking after the planet.
AlienInUnderpants on
"Hey, we have this thing that could ruin the earth and obliterate humanity… let's keep going for those sweet, sweet dollars!"
RexDraco on
What they need is investments. Why are people so fixated on Terminator?
RonnieGeeMan2 on
The AI mods have become so technically advanced that at the top of this thread, they posted a workaround on how to get to this thread
UnifiedQuantumField on
>before AGI surpasses human intelligence and threatens humanity
This headline is for morons. How so?
The AI is something developed by *people*. It’s like a hammer. A hammer can be used to build a house or to hit someone over the head. The way it gets used depends on who’s using it.
Same thing with AI.
The right question is to wonder what kind of *people* are developing AI and what would they most likely use it for.
We already have a pretty good idea who and what. Right now it’s business and military. And they all want either self benefit or an advantage over someone else.
jj_HeRo on
I can imagine the chat in Teams: “I bet you guys don’t have the balls to ask for this…”
BassoeG on
To everyone accurately pointing out that, in the event of AI going wrong enough for a bunker to be necessary, it'll be insufficient: yeah, you're right, but that's not the point. They're not hiding from the terminators but from everyone they just rendered permanently unemployed, before we starve to death.
AtomDives on
Or How I Learned to Stop Worrying & Love AI.
Deep Fake us some Peter Sellers satire, stat!
Rakshear on
It's not really about protecting us from AI, it's about protecting against the people who suddenly find themselves obsolete. Jobs like accounting, pharmaceutical research, and other white-collar roles where being smart and specializing used to mean job security are going to change. A lot of people are about to realize that being better than others at something isn't as special as we thought.
In my opinion, people should start thinking about jobs where the human touch is still essential, like working with kids in education, elder care, and other human services. These jobs can be incredibly meaningful, and the lack of meaning seems to be everyone's main gripe about their jobs besides money, but right now the main problems are that there just aren't enough people doing them and not enough money to support the systems. If AGI can actually improve how we manage resources, cut costs, and make medical advancements, then money wouldn't be the main issue anymore, and those human-centered fields could finally get the support and people they've needed to stop being such difficult fields to stay in long term.
AdPuzzled3603 on
AGI doom marketing is the best form of free advertising.
bob-loblaw-esq on
Do they not think that the AI they created would be able to bypass their bunker? Not to mention, who's gonna teach them how to live post-apocalypse? Is OpenAI gonna found Vault-Tec?
OG_Tater on
Oh, I'm sure our AI and robot overlords with limitless time and knowledge could not figure out how to get into your basement.
Anderson22LDS on
Need to run long term tests on any serious AGI contenders in an offline virtual reality environment.
brainfreeze_23 on
Some of these people are grifters, and some are Kool-Aid drinkers. I just wonder if some, or most of them, are both at once.
Owzwills on
Sometimes I think we should have an internet kill switch. Something that just turns it off in case of an event like this.
its_a_metaphor_fool on
“AGI is so close that we’re building our doomsday bunkers already, we promise! Now where’s that next multi-billion dollar round of investments?” At least it’s funny watching rich idiots throw their money down the drain…
Arashi_Uzukaze on
AGI would only be a threat to humanity because we would be a massive threat to them first. If humanity were more accepting, then we would have nothing to fear, period.
expblast105 on
My theory is LLMs will never take over, at least not until someone designs hardware that puts them into a brain-like structure. The structure of the brain is similar in most mammals, and mammals are the epitome of what we consider conscious. We still don't understand how it works, but now we can mimic it and scan it down to the molecular level. When some dumbass builds a hardware version and loads it with AGI, I think that will be the problem. Combine that with quantum processing and Tesla- or DARPA-like mobility. I have always wanted to build a bunker and probably will before I'm dead. But it would just delay the inevitable.
TheRexRider on
*Tech billionaire jams stick into bicycle wheel and falls over. Gets mad about it.*
Warm_Iron_273 on
Don’t worry, they will have access to the doomsday city under Denver airport that was built by spending trillions of taxpayer dollars without approval or knowledge from the public.