“Suleyman emphasizes that without rigorous controls, AI could spiral beyond human oversight, posing existential risks to society.”
“Suleyman’s concerns are not abstract; they stem from concrete observations of current AI trajectories. In a recent discussion reported by The Independent, he stated bluntly, “If we can’t control it, it isn’t going to be on our side.” This sentiment echoes across multiple platforms, highlighting a growing unease among industry leaders. Suleyman argues that AI systems, if allowed to become “uncontrollable,” could lead to unintended consequences, from economic disruption to broader societal upheaval. He points to the accelerating pace of development, where models are trained on massive datasets with vast computational power, potentially enabling self-improvement loops that humans might not anticipate or halt.”
“He envisions a future where AI amplifies human potential, but only if risks are mitigated early.”
inverseinternet on
It’s not really Copilot that I’m worried about, given how crap it is and how no one really uses it anyway.
MyNameIsLOL21 on
I like to think that AGI already exists and it quickly realised humans are not compatible with perfection, so it is sabotaging every aspect of society in hopes that we will destroy ourselves.
buttymuncher on
Keeps pushing AI down our throats in every possible way… urges regulation and restrictions. Pricks.
Island_Monkey86 on
There is far too much at stake to put on the brakes. Ultimately, in the eyes of people like Sam Altman, the race for AGI is a race to see who creates a superpower that is likely to replace most, if not all, human workforces, giving the winner total dominance.
Not a chance in hell that they will put on the brakes.
amurica1138 on
Between this guy and Bill Gates sounding alarms, it’s almost like MSFT is trying to build some defense against future liability.
fredlllll on
“please regulate us so we can blame the government for our failure!!”
WhyYesThisIsFake on
“What an unexpected and totally unforeseen consequence of the actions I took!”
Mue_Thohemu_42 on
If Microsoft is against it then it’s probably something good. They’re a terrible company.
Oilpaintcha on
Spoiler: like every big corporation, they’ve already written the regulations they want ratified, and are seeking out and paying politicians to put them in place.
zauraz on
While supporting AI? Arguably the biggest, most pointless reinforcer of climate change?
Kimantha_Allerdings on
Every time I hear some AI tech big-wig say some shit like this my first thought is that its primary purpose is advertising. “Yes, this technology is *totally* going to be capable of doing all these things at some indeterminate point in the future!”
rope_6urn on
Leave it to humans to make humans obsolete. Bring me back to the early 90’s.
piratecheese13 on
It’s clearer than ever that nobody currently holds enough power to enforce laws worldwide, digital or otherwise.
It’s also clear that Microsoft knows this, and that any calls for regulation made by groups actively lobbying against regulation are essentially just PR/advertising.
kemma_ on
The only thing we have to fear is big corps and the new norm of endlessly lying governments.
iDoMyOwnResearchJK on
They must’ve realized that they’re too far behind their competition and need to slow them down enough to catch up.
AdviceNotAskedFor on
I feel like now that they are embedded into everything, NOW they want regulation so some young upstart can’t come and steal their crown.
JimAbaddon on
I’d rather just shut it all down instead of regulating it.
tlst9999 on
Microsoft after already making their AI: We have to stop others from making their AI too.
This is regulatory capture in action. Get a tremendous advantage from exploiting legal vacuums. Fight for regulation once you’ve got your bag.
iloveshw on
When a person in power, able to implement what he’s proposing in a big way, calls for regulation, it’s either a PR move to come across as the good guy while knowing it’s not gonna happen (or even lobbying behind the scenes to make sure it doesn’t). Or they’re already in a position where it won’t matter to them, and the regulation is for the competition, especially the new players that just joined or will join their market.
flamingmenudo on
All these articles are just feeding into the hype to keep money flowing into all the LLM companies right now. The “existential” risk is actually the chance that OpenAI crashes and burns, taking billions of Microsoft’s money with it (and crashing the rest of the tech industry).
spinur1848 on
So they’ve concluded they won’t win the competition for the most existentially dangerous investment, and now they want all their competitors to agree to give it up.
If we can’t figure out how to prevent this kind of corporate behaviour before it causes harm instead of afterwards, we’re headed for the Fermi filter…
jon_the_mako on
I keep building this thing. I’m not gonna stop cause I get too much money. Regulate me, daddy.
I hate these CEOs.
Masterventure on
AI can’t even order a pizza.
I mean, crazy nazi tech CEOs like Larry Ellison, Elon Musk and Peter Thiel might be an existential risk to humanity and they might use AI for their crazy nazi plans, but the advanced autocorrect isn’t going to do shit unless we let the lunatics in charge do so.
Danny-Fr on
“We heard you, you don’t like AI, so here are regulations so that now only I and my friends can use it.
Haha, no, joking, but actually no: we’re gonna lobby everyone, impose our standards and make you pay for the stuff we’ve been shoving into your OS, because we need to foot the bills of our own auditors.
And you’ll sign all your content with your ID, too.
You’re welcome.”