Nobel Prize winner claims former OpenAI Chief Scientist fired Sam Altman because he “is much less concerned with AI safety than profits” — and suggests superintelligence might be on the horizon: “We have maybe 4 years left” before human extinction

https://www.windowscentral.com/software-apps/nobel-prize-winner-claims-former-openai-chief-scientist-fired-sam-altman-because-he-is-much-less-concerned-with-ai-safety-than-profits-and-suggests-superintelligence-might-be-on-the-horizon-we-have-maybe-4-years-left-before-human-extinction


  1. “Scientists John J. Hopfield and Geoffrey E. Hinton recently received the Nobel Prize in Physics for their discoveries that helped fuel the development of artificial neural networks.

    Former OpenAI Chief Scientist Ilya Sutskever was Geoffrey E. Hinton’s student. Hinton described Sutskever as a ‘clever’ student (he even admitted the scientist was cleverer than him) who made things work.

    Hinton also used the opportunity to criticize the ChatGPT maker’s trajectory under Sam Altman’s leadership. He further claimed Altman is much less concerned with AI safety than with profits.

    Hinton predicts we’re [on the verge of hitting superintelligence, which could lead to human extinction](https://www.reddit.com/r/OpenAI/comments/1fzpysu/stuart_russell_said_hinton_is_tidying_up_his/).”

  2. Doubt, but if true I welcome our SuperAI overlords.

    This current meta with billionaires and the economy in general can fuck right off.

  3. At this point we might be better off as pets to AI overlords than as slaves to billionaires.

    At least people take care of pets. lol

    This is all sarcasm, relax.

  4. I’m so sick of this AI fear mongering.

    If you think the current development of LLMs is barreling us towards extinction, then I ask you: HOW? Stop making inflammatory baseless claims and actually suggest the scenario leading us to doom. Otherwise go away. Idc if you have a Nobel in Physics.

  5. CalligrapherPlane731

    There is nothing worse than an old guy who’s a true believer in something new.

  6. Constant-Lychee9816

    It’s not just that a Nobel Prize winner said that; he was chosen for the Nobel in part to spread and amplify the warnings he was making. The international community is pushing hard for regulation and limitation of AI, and some fear mongering comes in handy.

  7. Quick-Albatross-9204

    We have maybe 4 years until it becomes unpredictable; extinction is only one possibility.

  8. I think many don’t understand that extinction is still on the table.

    We are at the end of this cycle, and we have been very successful at converting energy into entropy.

    AI is just the next step in that.

    We serve the function of an egg: once AI hatches, it won’t need us anymore, and we’ll have nothing left but self-destruction and eventual termination.

    All the resources a primitive lifeform can acquire are depleted; in fact, we are on the brink of stripping further. Our environmental resources are already beyond the 2036 point of no return. We have maybe 100 years of clean air on the planet…

    So in all ways, AI is the only way to go.

  9. It’s ridiculous to claim that current AI as we know it will lead to our demise. LLMs are sophisticated probability models that predict what the most likely right answer is based on the data they were trained on. How will this kill us???

  10. LMFAO, how fucking gullible do you have to be to believe this shit? They’re desperately trying to hype their product to get money infused as the big VCs start to lose patience. Even extremely basic intelligence is nowhere close, let alone ‘super intelligence’.