When people argue that AGI is inevitable, what they’re really saying is that the popular will shouldn’t matter. The boosters see the masses as provincial neo-Luddites who don’t know what’s good for them.

https://www.theguardian.com/commentisfree/ng-interactive/2025/jul/21/human-level-artificial-intelligence


  1. CrucioIsMade4Muggles

    That’s the most horseshit in a headline I’ve seen in a while, which is saying something in this political climate.

  2. Submission statement: “For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature.

    But it’s not.

    Technology is the product of deliberate human choices, motivated by myriad powerful forces.

    We have the agency to shape those forces, and history shows that we’ve done it before.”

    ———

    “Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures).

    Given all this, it’s natural to ask: **should we really try to build a technology that may kill us all if it goes wrong?**

    Perhaps the most common reply says: AGI is inevitable. It’s just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing [called](https://en.wikipedia.org/wiki/I._J._Good#:~:text=Thus%20the%20first%20ultraintelligent%20machine,to%20keep%20it%20under%20control.) “the last invention that man need ever make”. Besides, the reasoning goes within AI labs, if we don’t, someone else will do it – less responsibly, of course.

    “A new ideology out of Silicon Valley, [effective accelerationism](https://en.wikipedia.org/wiki/Effective_accelerationism) (e/acc), [claims](https://effectiveacceleration.tech/) that AGI’s inevitability is a consequence of the second law of thermodynamics and that its engine is “technocapital”. The e/acc [manifesto](https://effectiveacceleration.tech/) asserts: “This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.”

    ———

    **“Instead, the message tends to be: AGI is imminent. Resistance is futile.**

    [But] if you think AGI is inevitable, why bother convincing anybody?”

    “When people argue that AGI is inevitable, what they’re really saying is that the popular will shouldn’t matter. The boosters see the masses as provincial neo-Luddites who don’t know what’s good for them.

    That’s why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.”

  3. It’s a regurgitation machine. It’s nowhere near AGI. The only way it gets to AGI is a fundamental rethink of how the thing works. But all they’re doing is adding data centres. That’s not going to bring AGI.

  4. the-war-on-drunks

    There’s a huge vocal minority of AI haters. Most people who use AI aren’t out there defending it.

  5. When I say it’s inevitable, I mean, I am not going back. If every AI company went bankrupt tomorrow, I’d build my own rig and run an open source model.

    I want better models and more places they can work, but if that all stopped, we’d still have models with enormous potential for optimization by the open source community.

  6. That was certainly a bunch of words. Now I’m not sure if I’m an idiot for not knowing them.

  7. I don’t find the article convincing. Human cloning and nuclear weapons were never as profitable as AI is right now. Green energy initiatives and nuclear power bans were also only possible as far as economic realities allowed. AI may be in a bubble, but it will still make a lot of money. The continuing insane AI rush is inevitable.

    As for AGI, the kind that can significantly accelerate actual AI progress by helping with AI innovations, I’m not sure that it’s possible. We already see diminishing returns compared to a couple of years ago. Maybe it will be like approaching light speed in relativity.

  8. It’s inevitable in the sense of the Pandora’s box metaphor.

    Once people saw what nuclear bombs could do, everyone started attempting to make them, and eventually we decided to control their production.

    The US, China, and the EU will need to sign deals on how to regulate AI, and stopping research in one country will not slow down the progress.

  9. Everyone likes to identify themselves with the “popular will”. AI tools have massive user bases; most people welcome and embrace them, like it or not.

  10. Why would saying that [development] is inevitable be the same as saying people’s wishes about it shouldn’t matter? It’s just saying they don’t matter, whether they should or not, that’s all.

  11. PsychologicalTwo1784

    Surely when AGI is achieved, we won’t know for a while, as the intelligence/singularity will need some time working behind the scenes to make sure it can’t get switched off… Maybe it’s inevitable because they won’t really know it’s achievable (and how it’ll behave) until it’s achieved…

  12. im_thatoneguy

    Anything technologically possible and financially cost effective will be done no matter the morality or popular will.

    I’m not saying popular will “shouldn’t matter”; I’m saying it “doesn’t matter”, if we look back at history.

    Popular will said a whole lot of things shouldn’t have happened that did happen.

    AGI is possible. It’s possible because we aren’t made of magic. How it will be created, I don’t know. But at some point someone will make a process that’s as cheap as growing a gallon-sized human head but at least 1% smarter. And the only way we’ll be able to stop it is insanely pervasive and invasive surveillance.

  13. The correct concept is another layer built on top of the internet and on the data of human knowledge itself.

    The framing is dramatized for the news cycle and otherwise misleading.

  14. I think the condition the world is currently in demonstrates the shortcomings of humanity’s ability to govern itself and the world.

    It’s hard to know how things will play out, but there is definitely a possible future where AGI is created, is benevolent, and governs the world exponentially better than humans ever could, leading to a Star Trek-like utopia.

    There is also a possible future where it doesn’t give a shit about life on the planet and proceeds to eradicate us like a Terminator/Matrix dystopia, so… 🤷‍♂️

    As a person who understands game theory, I absolutely see its creation as inevitable

  15. Public opinion doesn’t understand what AGI even means. Most people treat LLMs like they’re AGIs already.

  16. Is it? I’m sure greedy billionaires wish it were so: replacing millions of workers, with their salaries and healthcare and opinions, with obedient robots and automated systems in server farms silently generating GDP.

    What we’re finding out rapidly is that, instead of geometric successes and compounding demand, we see small gains and limited profitability. Of course, this is one of those first-principles problems, so consider Shannon’s Rule: we should ask, perhaps most of all, how much of this is signal and how much is noise. Worse, these systems have problems being validated or verified in real-world circumstances.

    So if, at the end of the day, AI posts a 5% improvement in corporate efficiency, you get better bang for your buck training existing staff to use Excel or Outlook more effectively.

  17. inifinite_stick

    The title itself contains a rather glaring straw man. Nuclear tech has caused horrific accidents in the past, and yet reddit constantly boosts it. This tech hasn’t even had a chance to exist outside of concept yet.

  18. I usually see the inevitability as a function of compute and multiple discovery. Essentially, when you give researchers a million times the compute, they’ll rapidly iterate and come to the same conclusions as others. The only known way to delay this is to restrict compute globally, which is impossible. If you did manage that, though, then as soon as compute is allowed to spring forward to current nanofabrication capability, you’ll have all the problems and harm, immediately. So the best harm reduction is to educate and ensure governments are proactive, or at least somewhat reactive, to discoveries and their impact.

    This article also supposes that AGI is a distinct research area and that normal research can continue without stumbling onto advanced AI architectures. This is highly unlikely, as embodied AI, for example, will use multimodal models with neuromorphic sensors and continual learning. You’re basically looking at a minefield of approaches that could all lead to AGI. The same is true in other fields with difficult problems, where researchers attempt to create a model that reasons and optimizes as well as possible.

    On the positive side, AGI itself is a gradual process. It’s a culmination of many feedback loops. Building foundries to make new chips and creating fusion power to run the first AGIs as they self-optimise should give us a bit of time to plan.

  19. I repeat for the millionth time that rebranding the Luddites as “backwards yokels afraid of technology” was one of capitalism’s biggest victories. The Luddites saw their bosses using the earnings from their labor to buy machines that increased productivity, and keeping 100% of the benefit for themselves. The Luddites just wanted their fair share: either fewer hours or more pay, since they could be more productive in the same time. And the owners laughed all the way to the bank, so the workers smashed the machines that their labor had bought.

    The Luddites were right.

  20. Anyone who understands neural networks knows it’s going to be a long time before AGI is achieved. Power scaling is a bitch.