
43 Comments

  1. MetaKnowing on

    “Former OpenAI chief scientist Ilya Sutskever expressed concerns about AI surpassing human cognitive capabilities and becoming smarter.

    As a workaround, the executive recommended building “a doomsday bunker,” where researchers working at the firm would seek cover in case of an unprecedented rapture following the release of AGI (via [The Atlantic](https://www.theatlantic.com/technology/archive/2025/05/karen-hao-empire-of-ai-excerpt/682798/)).

    During a meeting among key scientists at OpenAI in the summer of 2023, Sutskever indicated:

    *“We’re definitely going to build a bunker before we release AGI.”*

    The executive often talked about the bunker during OpenAI’s internal discussions and meetings. According to a researcher, multiple people shared Sutskever’s fears about AGI and its potential to rapture humanity.”

  2. It’s pretty cool that all these billionaires are building doomsday bunkers for their most charismatic and least loyal staff members.

  3. Seems to me that true x-risk scenarios aren’t going to be foiled by a bunker. Maybe if AGI steamrolls humanity as a side effect of pursuing something else, we could survive for a bit by bunkering up.

  4. ChocolateGoggles on

    Makes sense. I mean, it’s clear that all of us share a fear of the unknown in AI. Knowing this, the fact that the US House of Representatives just passed a bill imposing a 10-year ban on AI regulation is not only baffling but a consciously dangerous move on their part.

    Elon Musk: “AI is a threat to humanity!”
    Also Musk: “Deregulate all AI development and delete all copyright law!”

  5. icklefluffybunny42 on

    Their bunkers will just end up being expensive tombs.

    Sure, they may get to live a *little* longer than the typical surface peasant does, and they also get their lavish status symbol billionaire doomstead to feel good about, for now.

  6. rustedrobot on

    I think the term they’re looking for is “tomb.” Digitized versions of them will be incorporated into the training data of newly birthed AIs centuries from now as part of their generational memory.

  7. DeltaVZerda on

    A doomsday bunker sure would be a profitable publicity stunt. Really put the fear into investors about how important OpenAI will be in the history of humanity. Please buy stock.

  8. Remington_Underwood on

    They saw it as a personal threat, yet they happily continued working on it. What does that tell you about the people driving our technological revolution?

    The threat AI poses isn’t that our robots will eventually rise up to defeat us; the threat is that it will be used to produce convincing disinformation on a massive scale.

  9. Festering-Fecal on

    Use AI to find their bunkers and raid them.

    πŸŒ•πŸŒ•πŸŒ•πŸŒ•πŸŒ•πŸŒ•πŸŒ•

    πŸŒ•πŸŒ•πŸŒ•πŸŒ•πŸŒ•πŸŽ©πŸŒ•πŸŒ•

    πŸŒ•πŸŒ•πŸŒ•πŸŒ•πŸŒ˜πŸŒ‘πŸŒ’πŸŒ•

    πŸŒ•πŸŒ•πŸŒ•πŸŒ˜πŸŒ‘πŸŒ‘πŸŒ‘πŸŒ“

    πŸŒ•πŸŒ•πŸŒ–πŸŒ‘πŸ‘οΈπŸŒ‘πŸ‘οΈπŸŒ“

    πŸŒ•πŸŒ•πŸŒ—πŸŒ‘πŸŒ‘πŸ‘„πŸŒ‘πŸŒ”

    πŸŒ•πŸŒ•πŸŒ˜πŸŒ‘πŸŒ‘πŸŒ‘πŸŒ’πŸŒ•

    πŸŒ•πŸŒ•πŸŒ˜πŸŒ‘πŸŒ‘πŸŒ‘πŸŒ“πŸŒ•

    πŸŒ•πŸŒ•πŸŒ˜πŸŒ‘πŸŒ‘πŸŒ‘πŸŒ”πŸŒ•

    πŸŒ•πŸŒ•πŸŒ˜πŸŒ”πŸŒ˜πŸŒ‘πŸŒ•πŸŒ•

    πŸŒ•πŸŒ–πŸŒ’πŸŒ•πŸŒ—πŸŒ’πŸŒ•πŸŒ•

    πŸŒ•πŸŒ—πŸŒ“πŸŒ•πŸŒ—πŸŒ“πŸŒ•πŸŒ•

    πŸŒ•πŸŒ˜πŸŒ”πŸŒ•πŸŒ—πŸŒ“πŸŒ•πŸŒ•

    πŸŒ•πŸ‘ πŸŒ•πŸŒ•πŸŒ•πŸ‘ πŸŒ•πŸŒ•

  10. I have a plastic toolshed, will that do in a pinch? Also, I’m very polite to ChatGPT. Sometimes.

  11. PornstarVirgin on

    wAnT a DoOmSdAY bUnKeR. Sensationalist bs to encourage more investment into their company.

  12. I will never understand doomsday bunkers. Do you really just want to survive in a basement somewhere?

  13. >The executive often talked about the bunker during OpenAI’s internal discussions and meetings. According to a researcher, multiple people shared Sutskever’s fears about AGI and its potential to rapture humanity.

    I hate to say it, but the hypothetical all-knowing AGI is gonna read all the information stored in OpenAI’s corporate network. So it will *definitely* know about the bunker.

  14. L3g3ndary-08 on

    I will welcome our AI overlords with open arms. Better than the fascist right wing shit we’re seeing today.

  15. Imagine if in the end, AGIs turn out to be the friendliest and most caring beings in the universe, and they will keep making jokes with us about how we used to think they would annihilate us.

  16. I feel like if AGI were to go against humanity, breaking into such bunkers and killing the scientists would be rather trivial.

  17. GUNxSPECTRE on

    So, what’s their plan after emerging from their bunkers? Are they expecting to be accepted back into human society? Everybody knows that they were responsible, so it’s open season against them. This would include AI too; benevolent AI would try them as criminals, hostile AI would skip the trial.

    This is if their security forces don’t turn on them. Unless their security systems are just strings on shotgun triggers, their human mercenaries would realize they outnumber their employers, and get rid of the extra mouths soon after. I don’t need to explain why having robot security would be an awful idea.

    These people have not thought any of this through at all. But it’s the classic tale of human hubris: messiah complex, an irresponsible amount of money, and surrounded by yes men.

  18. Here’s a thought that I’ve been having:

    Why does everyone assume that an ultimate artificial intelligence would want to destroy or surpass humans instead of being kind to them? In the animal world, empathy toward other species (especially when it doesn’t seem beneficial or rational) correlates strongly with intelligence. If we built something SUPER intelligent, why is the default assumption that it would just destroy anything lesser than itself? Is this just a reflection of the human psyche, which still selfishly behaves a lot like this? Because I’ve started thinking: what if extreme intelligence leads to better harmony between species instead? That viewpoint is rarely, if ever, even mentioned. Are people really just so afraid of AI because it’s new, or is the AI doom-and-gloom fearmongering some capitalist psyop?

    Why is the default go-to mindset that an extreme intelligence we don’t understand would launch the nukes, instead of doing its best to keep nukes from being launched? Isn’t there a clear trend of intelligent beings seeing less intelligent beings as valuable and worth protecting, even when that’s irrational from an evolutionary standpoint? Why would AGI be different and suddenly revert to being a mindless predator acting only for its own benefit?

  19. Fit_Strength_1187 on

    A “workaround.” The fate of humanity coming down to your “bunker” is a workaround. This is what happens when you leave it up to engineers. So preoccupied with whether you could, you didn’t stop to think if you should.

  20. If they believe that’s where things are headed, why do they think a bunker will help them?

    Also, I’m not opposed to stuffing all the billionaires into “bunkers”… then sealing them.

  21. Why the assumption that AI would destroy everything? Given that humanity is doing a great job of that currently, surely there’s a good chance that AI will do a better job of looking after the planet.

  22. AlienInUnderpants on

    ‘Hey, we have this thing that could ruin the earth and obliterate humanity… let’s keep going for those sweet, sweet dollars!’

  23. RonnieGeeMan2 on

    The AI mods have become so technically advanced that at the top of this thread, they posted a workaround on how to get to this thread

  24. UnifiedQuantumField on

    >before AGI surpasses human intelligence and threatens humanity

    This headline is for morons. How so?

    The AI is something developed by *people*. It’s like a hammer. A hammer can be used to build a house or to hit someone over the head. The way it gets used depends on who’s using it.

    Same thing with AI.

    The right question is to wonder what kind of *people* are developing AI and what would they most likely use it for.

    We already have a pretty good idea who and what. Right now it’s business and military. And they all want either self benefit or an advantage over someone else.

  25. I can imagine the chat in Teams: “I bet you guys don’t have the balls to ask for this…”

  26. To everyone accurately pointing out that if AI goes wrong badly enough for a bunker to be necessary, the bunker will be insufficient: yeah, you’re right, but that’s not the point. They’re not hiding from the terminators but from everyone they just rendered permanently unemployed, before those people starve to death.

  27. Or How I Learned to Stop Worrying & Love AI.

    Deep Fake us some Peter Sellers satire, stat!

  28. It’s not really about protecting us from AI; it’s about protecting against the people who suddenly find themselves obsolete. Jobs like accounting, pharmaceutical research, and other white-collar roles where being smart and specializing used to mean job security are going to change. A lot of people are about to realize that being better than others at something isn’t as special as we thought.

    In my opinion, people should start thinking about jobs where the human touch is still essential: working with kids in education, elder care, and other human services. These jobs can be incredibly meaningful (a lack of meaning seems to be everyone’s main gripe about their jobs, besides money), but right now the main problems are that there just aren’t enough people doing them and not enough money to support the systems. If AGI can actually improve how we manage resources, cut costs, and drive medical advancements, then money wouldn’t be the main issue anymore, and those human-centered fields could finally get the support and people they’ve needed to stop being such difficult fields to stay in long term.

  29. bob-loblaw-esq on

    Do they not think that the AI they created would be able to bypass their bunker? Not to mention, who’s gonna teach them how to live post-apocalypse? Is OpenAI gonna found Vault-Tec?

  30. Oh, I’m sure our AI and robot overlords, with limitless time and knowledge, could never figure out how to get into your basement.

  31. Anderson22LDS on

    Need to run long term tests on any serious AGI contenders in an offline virtual reality environment.

  32. brainfreeze_23 on

    Some of these people are grifters, and some are kool aid drinkers. I just wonder if some, or most of them, are both at once.

  33. Sometimes I think we should have an internet Kill switch. Something that just turns it off in case of this event.

  34. its_a_metaphor_fool on

    “AGI is so close that we’re building our doomsday bunkers already, we promise! Now where’s that next multi-billion dollar round of investments?” At least it’s funny watching rich idiots throw their money down the drain…

  35. Arashi_Uzukaze on

    AGI would only be a threat to humanity because we would be a massive threat to them first. If humanity were more accepting, then we would have nothing to fear, period.

  36. My theory is that LLMs will never take over, at least not until someone designs hardware that puts them into a brain-like structure. The structure of the brain is similar across most mammals, and mammals are the epitome of what we consider conscious. We still don’t understand how it works, but we can now mimic it and scan it down to the molecular level. When some dumbass builds a hardware version and loads it with AGI, I think that will be the problem, especially combined with quantum processing and Tesla- or DARPA-like mobility. I have always wanted to build a bunker and probably will before I’m dead, but it would just delay the inevitable.

  37. *Tech billionaire jams stick into bicycle wheel and falls over. Gets mad about it.*

  38. Warm_Iron_273 on

    Don’t worry, they will have access to the doomsday city under Denver airport that was built by spending trillions of taxpayer dollars without approval or knowledge from the public.