On a recent podcast with Lex Fridman, Google CEO Sundar Pichai said, "I'm optimistic on the p(doom) scenarios, but … the underlying risk is actually pretty high."

Pichai argued that the higher it gets, the more likely that humanity will rally to prevent catastrophe. 

Google CEO says the risk of AI causing human extinction is "actually pretty high", but is an optimist because he thinks humanity will rally to prevent catastrophe
by u/katxwoods in Futurology


  1. Submission statement: what’s your p(doom)? (Probability that advanced AI will lead to catastrophes like human extinction)

    What do you think would lower it? What do you think would raise it?

    Do you trust humanity to rally enough and in time?

  2. Maybe we should just stop pursuing this line of research. Maybe we can find other avenues to explore.

    Why must we pursue AI? It’s spoken about as if it’s an inevitable and necessary conclusion but I don’t actually think it is. Perhaps humanity would benefit from a course correction.

  3. ZenithBlade101

    Google, the company behind Gemini, DeepMind, and AlphaFold, is hyping up AI?

    😮

  4. “I’m optimistic that while I make all the money from this technology, someone else will come along and find a way to avoid extinction, so that my children will get to enjoy their riches!!” These are extraordinary levels of greed and cognitive dissonance. If I were a Google/Alphabet shareholder, I would be wary of having such an irresponsible child run my company!

  5. Rev_LoveRevolver

    Over a million Americans died because of COVID, and to this day there are folks who think the whole thing never even happened. Sure, they’ll “rally” to prevent catastrophe. This guy may know computers, but has he ever actually met a human?

  6. PensionNational249

    How, exactly, does Sundar believe that humanity will “rally” to prevent catastrophe if and when a malignant ASI is created?

    Cause I mean, it’s my understanding that once the ASI is made, that’s pretty much it, no take-backsies lol

  7. Agreeable_Service407

    You just need to convince humanity that AI is brown and we’ll take care of it.

  8. Just give him enough money and he will keep you safe. This is the story these guys are selling and the gullible are buying. If anything will cause extinction it’s natural stupidity.

  9. OpenImagination9

    Please, we couldn’t even get off our asses to vote against impending doom after being clearly warned.

    I just hope it’s quick.

  10. Just like we have for climate change, right? Not to mention that climate change is a problem being exacerbated by the exorbitant energy usage of AI.

  11. Chao_Zu_Kang

    Kinda delusional to think that humanity would “rally to prevent catastrophe”. We didn’t do it for the current catastrophe(s) – we won’t do it for future catastrophes.

  12. Orlok_Tsubodai

    “I’m confident humanity will rally to prevent the catastrophic results of the products I’m actively developing” is a pretty wild stance.

  13. Because the wealthy believe that they’ll be isolated from any of the blowback AI will have.

    “Well, I’m filthy rich; even if I lost my CEO job I would be fine. Just early retirement.”

    As if they wouldn’t either be targeted by AI along with the rest of humanity, or, if AI doesn’t outright destroy humanity, find that the people left are so pissed off that they target the rich anyway.

    It’s funny: in the best-case scenario, AI replaces workers and makes them jobless. Well, you still need to support them, which means people like the Google CEO will be forced to pay massive taxes to fund UBI; otherwise 4 billion humans will revolt and take their pound of flesh.

  14. Humans are the likeliest cause of an extinction level event, followed by meteors. AI may be the next nuclear arms race, but it’s a tool that anyone can wield, not just superpowers.

  15. *Pichai argued that the higher it gets, the more likely that humanity will rally to prevent catastrophe.*

    Yeah totally, bud! Just look at climate change.

  16. BetafromZeta

    The reality is that you can’t quantify p(doom) well, because it’s an entirely new frontier and extrapolation doesn’t work that well, particularly in fat-tailed distributions. Sundar knows that, but he knows it’s unacceptable to say “we have no idea what will happen”.

  17. AI in a few years might be capable of creating a virus that could wipe out humanity. The cost of running such operations is minimal, which will allow cults/terror groups to produce weapons of mass destruction at low cost.

    The safeguards on AI are nonexistent; they struggle even to prevent users from producing porn.

    So yeah, I don’t buy his cheap optimism. Even if humanity would “rally to prevent catastrophe”, the cost might still be millions or billions of dead. And all of that because we’re in the middle of a world war and countries are rushing the advancements.

    And another thing: what’s happening with AI is utterly undemocratic. It shouldn’t be up to the CEOs to decide.

  18. BurningStandards

    What happens if the AI joins and rallies the people against the CEOs? 🤔

  19. “Let me make untold billions of dollars right now and – if it goes wrong – I’m sure you guys will work *something* out…”

  20. Mayonnaise_Poptart

    I think it’s far more likely that the consequences of AI will cause humans to cause human extinction. The disruption to our social norms and economic systems will cause unrest that will result in violent conflict between humans.

    We are on the verge of exponential advancement in automation, with AI playing a major role. People need to stop asking “can AI do my job?” and instead ask “what parts of my job can be automated?” I think that almost anyone can come up with a significant percentage of their current job responsibilities that could be automated now if their employer just invested in the tech.

    AI will also allow for huge leaps in productivity monitoring. Once that shit hits the fan it’s going to be a bloodbath of middle management do-nothings getting kicked to the curb.

    Humans could use this as an opportunity to reimagine society post-scarcity and move into a higher quality of life for all… but I think we’ll probably just do war.

  21. Instead of hoping we rally, can’t we just not test those waters? Humanity has basically never rallied together around anything. Small groups of people have rallied against other small groups of people, but that’s not really the same.

    And when large groups of people do try to rally around something (see the 200,000 people marching in Budapest, Hungary to protest the LGBT rights issues there), nothing really happens. Not acknowledging or responding to a protest generally results in the protest going away (case in point: Occupy Wall Street).

  22. “Eh, I believe you guys got it!” said from the balcony of a villa over the robot swarms.

  23. konigstigerr

    Wow, guy who is invested in AI tells you AI is powerful. I am so surprised, you don’t even know!

  24. I don’t think he’d like what that rally would realistically have to look like

  25. All these CEOs give such “hold me back bro” vibes whenever they talk about AI doing something super scary. They just exploit people’s fears of AI from stuff like Terminator to get more investment.

  26. IShallRisEAgain

    Stop falling for this garbage. It’s all marketing hype bullshit to convince you that LLMs are AGIs. (Well, there is also the strong possibility that CEOs are dumb enough to actually believe this.) LLMs will never evolve into Skynet or whatever. The more likely scenario is that some moron decides that ChatGPT or some other chat client is good enough to monitor equipment and sensors for something dangerous, and when it fails it kills a bunch of people.

  27. bluelifesacrifice

    Owners will use AI to obliterate the poor, then be surprised when AI enslaves or deletes them.

  28. This implies humans would collectively place human needs over profits. I don’t like our chances.

  29. Says the man at the helm of a company that has immense influence in how these things play out. What he’s really saying is “I’m going to keep sitting in my chair and continue capitalizing.”

  30. It truly astounds me that these assholes can say this with a straight face and then continue like everything’s normal. If you feel that way, then why do you think we should continue down the path of AI? Why aren’t you trying to ask for regulation? Why aren’t you pushing to limit the use of AI on a national scale? Safety plan?

  31. UniverseBear

    Ah yes, we’ll rally to prevent catastrophe just like we did for global warming!

  32. spacepoptartz

    “Humanity will rally to fix all my bullshit mistakes while I get rich off it, it’s ok”

  33. Illustrious-Word2950

    I love that he’s optimistic that we will rally to stop the monstrosity that he is contributing to creating.

  34. They’re drumming up hype for some bullshit, or it’s some stupid thing for their stocks.

  35. Lol... I rarely find myself in so much disagreement with someone. Neither do I believe the risk of AI causing human extinction is as high as he thinks, nor do I believe we would rally together if AI were to go hostile on us.

    We, as a species, are cursed with pragmatism. If we see any potential angle for personal benefit, we take calculated risks. In the case of humanity dealing with a hostile AI, there will be at least a small percentage of the human population who will deem the risk acceptable and promote their ideals for their own benefit. For example, maybe to become a ruling power over a humanity that accepts subjugation. We do this all the time. We would do it with a hostile AI too.

  36. Local arsonist says chances of catastrophic fires actually pretty high but optimistic firefighters can put them out.

  37. The_Chubby_Dragoness

    He’s a fucking idiot in both regards.

    LLMs won’t kill humanity, and we won’t band together for climate change.

  38. It’s pretty unlikely that humanity will collectively rally to prevent catastrophe since we’ve pretty much never collectively rallied to do anything. More likely that the AI would play us off against each other.