‘Godfather of AI’ says he’s ‘glad’ to be 77 because AI probably won’t take over the world in his lifetime | Hinton compared AI to raising a tiger cub that could turn deadly.

https://www.businessinsider.com/ai-godfather-geoffrey-hinton-superintelligence-risk-takeover-2025-4


  1. MetaKnowing on

    “Geoffrey Hinton, often referred to as the ‘godfather of AI,’ warned that AI is advancing faster than experts once predicted — and that once it surpasses human intelligence, humanity may not be able to prevent it from taking control.

    “Things more intelligent than you are going to be able to manipulate you,” said Hinton, who was awarded the 2024 Nobel Prize in physics for his breakthroughs in machine learning.

    He compared humans advancing AI to raising a tiger. “It’s just such a cute tiger cub,” he said. “Now, unless you can be very sure that it’s not gonna wanna kill you when it’s grown up, you should worry.”

    Hinton estimated a “sort of 10 to 20% chance” that AI systems could eventually seize control, though he stressed that it’s impossible to predict exactly.

    One reason for his concern is the rise of AI agents, which don’t just answer questions but can perform tasks autonomously. “Things have got, if anything, scarier than they were before,” Hinton said.

    [Hinton resigned from Google in 2023](https://archive.is/o/XcARA/https://www.businessinsider.com/geoffrey-hinton-godfather-of-ai-quits-google-sounds-alarm-chatbots-2023-5). He said he left so he could speak freely about the dangers of AI development.”

  2. OnlyOneFeeder on

    Ah yes! I remember his statement about radiologists being obsolete in 5 years back in 2016…

  3. GUNxSPECTRE on

    I mean, that seems to be the mantra of Boomers.

    “We got to have fun at everybody else’s expense, but good luck with the clean up.”

  4. RedHeadedSicilian52 on

    Boomer blasé about leaving the world worse through his pursuit of short-term personal profit, news at 11.

  5. thenwetakeberlin on

    This guy has such a one-dimensional view of both intelligence and what constitutes a threat.

    Like, AI is already manipulating us largely against our own best interests a metric shit ton through control of our media diets. It doesn’t need to be like a cunning human to get there — a pretty “dumb” algorithm at scale can wreak havoc on “smarter” entities.

    Which hints at the much more pressing problem: we shouldn’t fear superintelligent AI (at least not yet) — we should fear the weaponization of AI against each other. And that’s already been happening a whole bunch. So is superintelligent AI a future threat? Sure, totally. Is it the ONLY or even most pressing way AI as we currently understand it is a threat? Uh, no. (Also, is superintelligent AI even likely to be superintelligent in the ways we are able to comprehend? TBD!)

    Meanwhile all this “lol, glad I’ll be dead soon because SOMEDAY Hal will be real and will be a super big, beautiful, intelligent threat!” is half hype, half personal grandstanding (“there is a 10-20% chance that I am basically god — you guys should all know that”) and completely ignores the much more real, pressing, and harder to correct issue of “lol, we are fucking up society by using AI to make decisions about what we consume and learn and how best to keep us from focusing elsewhere because of perverse economic incentives and the diffusion of responsibility through social hierarchies and the abstraction of technology all working against our basically ape brains optimized for figuring out which berries not to eat.”

    It’s like a dude saying “I’m glad I’m moving out because someday this house just might burn down as the result of my genius inventions deciding to start a fire!” while the upstairs is already on fire because somebody keeps selling tickets to a light show where he uses the machines to throw sparks. And on top of that, said dude keeps on distracting a bunch of us from doing anything about it by spinning tales of future threats. (Also, ironically the biggest threat to the emergence of a future superintelligence is that the already-burning fire brings the house down.)

    PS – Hinton is def a key figure in deep learning models for sure and deserves solid credit for it, but our myopic view of the long arc of scientific progress has a real bad habit of picking a spot on the arc and being like “everything happened right here where this one dude is standing.”

  6. nothing_pt on

    So, fuck us all since he’s old as fuck. Great for him. I’m also glad he’s 77

  7. Adventurous_Mix_8533 on

    When someone gets a visitor claiming to be from the future and telling them their life is of vital importance, then we need to worry.

  8. Bagellllllleetr on

    Man, Boomers not beating the “fuck you, I got mine” allegations.

  9. JustDirection18 on

    Also Geoffrey Hinton: “I have stage 4 cancer and my doctor’s given me a couple of months to live”

  10. dustofdeath on

    And so what if it does, it’s not like the billionaires and old politicians in charge are doing any better.