“If AGI really is coming in two to five years, it gives all of us—companies, society, and governments—precious little time to prepare …
The reason safety is getting short shrift is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market.
In economic terms, this is a market failure—the commercial incentives of private actors encourage them to do things that are bad for the collective whole. Normally, when there are market failures, it would be reasonable to expect the government to step in. But in this case, geopolitics gets in the way.
The U.S. sees AGI as a strategic technology that it wants to obtain before any rival, particularly China. So it is unlikely to do anything that might slow the progress of the U.S. AI labs—even a little bit. (It doesn’t help that AI lab CEOs such as Altman—who once went before Congress and endorsed the idea of government regulation, [including possible licensing requirements for leading AI labs](https://archive.is/o/O5dMK/https://www.wsj.com/tech/ai/chatgpts-sam-altman-faces-senate-panel-examining-artificial-intelligence-4bb6942a), but now says he thinks [AI companies can self-regulate on AI safety](https://archive.is/o/O5dMK/https://youtu.be/5MWT_doo68k)—are lobbying the government to eschew any legal requirements.)
Of course, having unsafe, uncontrollable AI would be in neither Washington nor Beijing’s interest. So there might be scope for an international treaty. But given the lack of trust between the Trump administration and Xi Jinping, that seems unlikely. It is possible President Trump may yet come around on AI regulation—if there’s a populist outcry over AI-induced job losses or a series of damaging, but not catastrophic, AI-involved disasters. Otherwise, I guess we just have to hope the AI companies’ timelines are wrong.”
floopsyDoodle
Timelines are getting shorter for literally no reason, as we don’t even know what AGI would look like or how it would “happen”…
I use AI daily for work and it’s not improving by leaps and bounds. It still hallucinates, and it still has no concept that it can be wrong (DeepSeek’s “reasoning” model does to some degree). It’s just silly PR from AI companies trying to get more VC money…
shadowrun456
“AI safety” is a scam, perpetrated for the sole reason of centralizing the control of AI in the hands of mega-corporations. It’s preparing the public to support the bans on open-source uncensored AIs, which could truly empower everyone equally, instead of giving only billionaires unprecedented power and control over everyone else.
Specialist_Power_266
This is like seeing a wall of fire getting closer and closer to you while having zero ability to get the hell out of the way. World governments have been co-opted by big tech to the point of complete impotence, so catastrophe is inevitable.
IgnoranceIsTheEnemy
AI development follows game theory. It’s a developmental prisoner’s dilemma: whoever cracks AGI first has a massive advantage.
Anyone limiting it loses to a second party that doesn’t impose the same restrictions.
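The prisoner’s-dilemma structure this comment invokes can be sketched with a toy payoff matrix. The numbers below are made up purely for illustration; only their ordering matters (falling behind is worst, winning the race alone is best, a mutual arms race is collectively worse than mutual restraint):

```python
# Toy payoff matrix for the "AI lab dilemma" described above.
# Payoffs are illustrative, not empirical: (row player, column player).
# Strategies: "restrict" (impose safety limits) or "race" (don't).
PAYOFFS = {
    ("restrict", "restrict"): (3, 3),  # both cautious, moderate progress
    ("restrict", "race"):     (0, 5),  # the restricted lab falls behind
    ("race",     "restrict"): (5, 0),
    ("race",     "race"):     (1, 1),  # arms race: worst collective outcome
}

def best_response(opponent_move):
    """Return the move that maximizes the row player's payoff
    against a fixed opponent move."""
    return max(("restrict", "race"),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Racing dominates regardless of what the other lab does:
print(best_response("restrict"))  # race
print(best_response("race"))      # race
```

With these (hypothetical) payoffs, “race” is the dominant strategy for each lab even though mutual restraint would leave both better off, which is exactly the dilemma the comment describes.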
TFenrir
In all threads like this on Futurology, I am going to try to encourage people whose natural inclination is to dismiss this as “hype” to ask themselves what they would need to see happen over the next couple of years before they changed their mind.
Ideally, signs that would appear at least a couple of years before any deadline to intervene or to start making significant changes to how our world works.
Right now, you have a significant number of leading researchers, politicians, forecasters, etc. all ringing alarm bells; the number of people who fall in that camp is rising rapidly, and the number who are very skeptical is shrinking.
The research itself is compelling. I know most people don’t want to go read the papers and the arguments by researchers who say this future is increasingly likely, but at the very least it’s worth trying to get an idea of the full arguments being made.
A couple of good reads would be
https://ai-2027.com/
https://situational-awareness.ai/
The first is a much easier read, and much more about trying to paint a plausible scenario for what many researchers envision as the most “rapid” pace of acceleration.
mavven2882
Everything I’ve seen about AI replacing jobs seems more like “investor speak” meant to generate market hype for something that doesn’t (or can’t) even exist yet. AI has been shown time and time again to be egregiously incorrect, misleading, prone to hallucinations, etc.
While AI feels like a great tool to aid you in your job, outright replacing people seems to be what they want you to believe rather than what’s actually happening. Has anyone out there actually seen real examples of AI replacing human jobs, other than shitty call center positions and chatbots?
UnpluggedUnfettered
“I know the stock market is rough right now, but don’t stop vomiting money into our Nvidia shares . . . because, for real, we are just about to . . . blow. Your. Mind. Just, gonna need your investment to get there. Totally gonna pay off. Of course I would tell you if it wasn’t going to happen! Hurry up though, I have a lot of doors left to knock on.” — AI companies
davesr25
Ask an AI to simulate a script attacking another AI, then pass the output through another AI, back and forth. I was messing with this in Python. I don’t know enough to fully understand what it was firing out at me, but it looked interesting. Am I allowed to post a script here? I can add a few parts to the chat.
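The back-and-forth loop this comment describes can be sketched roughly as below. Note that `query_model` is a placeholder, not a real API; in practice it would be replaced with a call to an actual chat-completion client:

```python
# Hypothetical sketch of alternating one model's output into another.
# `query_model` is a stand-in -- substitute any real chat API client.
def query_model(model_name, prompt):
    # Placeholder: a real version would call an actual model API here.
    return f"[{model_name} responds to: {prompt[:40]}]"

def adversarial_ping_pong(model_a, model_b, seed_prompt, rounds=3):
    """Alternate a prompt between two models, feeding each model's
    output to the other, and collect the full transcript."""
    transcript = [seed_prompt]
    current = seed_prompt
    for i in range(rounds):
        model = model_a if i % 2 == 0 else model_b
        current = query_model(model, current)
        transcript.append(current)
    return transcript

log = adversarial_ping_pong("model-a", "model-b",
                            "Write a script that probes another AI.")
```

This is only the plumbing of the experiment, not the attack itself; whether anything interesting comes out depends entirely on the models behind the placeholder.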
Ithirahad
They are getting shorter, as these firms become more desperate for continued investor funding. The burst will be legendary.
TerriKozmik
I will believe in AI when it has designated billionaires and certain corporations as threats to humanity and terrorists.
Imthewienerdog
Good. Fewer restrictions on technology is a good thing. We’re in a new world, and we’ve got to adapt or we’re going to fail.
D_is_for_Dante
Why would AGI come sooner? It won’t be based on some random LLM that mimics reasoning.
Same as fusion power, which has been coming “in five years” for the last fifty years.
ADisappointingLife
Can confirm.
I red-team models, professionally, and they’re just as breakable now as they were on day one.
More so if you consider all the different modalities that can be broken and abused.
It’s mostly down to how creative you are – which is a scary thought, because psychopaths tend to be pretty creative.
CondiMesmer
People are still falling for this grift?
We have absolutely no evidence pointing toward AGI. It’s currently science fiction.
AI “safety” is a grift that hopefully most people see through at this point. Any possible damage is already out there; there’s no preventing it, and it just vilifies open source, since open source could be used to bypass “safety”.
And who decides what’s safe, anyway? Big corps, of course, who would love to ban open-source competition. It’s an anti-competitive grift, plain and simple.
wiiinks
It’s amazing that they have spent this much money and energy creating LLMs that can’t come close to coding as well as an average senior engineer, and they still say this stuff.
xxAkirhaxx
Stop posting this shit, it’s not close. I’ll admit we have cool shit coming, but what we have is not AGI. We are far, far, far, far from AGI. If you know how this stuff works under the hood you’ll know exactly how to exploit it and why it isn’t AGI.
rooygbiv70
The definitions I’ve seen these corps outline for AGI are super underwhelming, imo. Like, oh, now it’s AGI because we squeezed enough diminishing improvements out of LLMs to inch the models above some particular benchmark? Honestly, it feels like “AGI” is just something they’re keeping in their pocket to declare victory whenever they need another big hype injection.
SavePeanut
Just watched Willy Wonka from the 70s and they had an AI computer with all the answers in that movie too… and in the 50s AI robots were just around the corner too…
TheBiblePimp
AI is a scam, folks. It’s just a tool, not a silver bullet.
AtariAtari
These comments help get more VC funding. Ten years ago, AI was supposed to have replaced all radiologists by now.
hervalfreire
It feels like it’s been a while since Altman claimed AGI would happen any time soon; his 2024 prediction was that it was coming in 2025.
The big players (Google, Microsoft, Anthropic, and OpenAI) are all visibly doubling down on the one workflow that seems to generate revenue: coding agents and/or copilots. Quietly dropping the AGI hype and trying to buy out cash cows like Cursor. Fine-tuning coding models (4.1, Flash 2.5). Hand-waving that “true AGI is in the 2030s” again.
I guess the reality of LLM limitations is finally catching up?