This article highlights a growing consensus among AI researchers: scaling alone will not lead to AGI. Despite massive investments from tech giants, diminishing returns are becoming apparent.
If scaling fails, what comes next for AI? Will we see new architectures emerge, or will AI become more integrated and procedural rather than aiming for general intelligence?
Narrascaping
Silicon Valley has treated intelligence like a divine mystery: more data, more compute, more infrastructure—less intelligence.
But even the AI priesthood (the researchers) is admitting that this approach is failing. Scaling isn’t intelligence; it’s a narrative to centralize power and control access to AI. An idol to be worshipped.
What happens when idols fall? Are the remainder smashed, or are new ones constructed?
It is not what replaces scaling, but who profits from the replacement that matters.
Bitter_Internal9009
There’s a huge disconnect between the AI researchers who are trying to create a sentient being and those trying to make a tool for espionage, art industrialization, and crypto scams. The former are getting nothing in terms of funding while the latter are getting billions.
M4K4SURO
You can’t compute true intelligence, at least not with classical computers.
Sunflier
>But we are so desperate to replace people. All the money for us, none for you.
-Investors on why they are pouring money into the pit.
colinallbets
Achieving AGI is futile; we don’t even have an agreed-upon definition of human consciousness.
wadejohn
It might not end up doing everything that we think it would, but it would be a progression from where we are today.
Sakkyoku-Sha
Every time a “new model” drops, there are always people claiming both that “it’s so shit now” and that “it’s so much better!”. As someone who has been using these models to assist with code generation for the past 3 years, I’d honestly say the rate of improvement has been far less impressive than the press releases might have you believe. You could have “vibe coded” 2 years ago and gotten very similar results to what we have today.
One of the reasons that AI has gotten so good at coding so quickly is that there is a ton of available code in a format that is easy to train on. For that same reason, though, people have already scraped most of everything there is to train on. There isn’t another 40TB of training data they can just go get. That’s not to mention the fact that AI training on AI-generated code will actually lead to a reduction in quality. Right now it’s more about trying to build new ways to train on the data they already have, and it doesn’t seem like they’ve made a meaningful breakthrough on that end just yet.
penneacarbonausea
People sometimes forget that Moore’s Law no longer holds. Hardware is approaching its physical limits.
If I want to write a text using my own creativity, I can use a 90s laptop running Windows 95 to do so. If I want to create the same text using artificial intelligence, I need a computer system many orders of magnitude larger than the obsolete machine in that example.
Now imagine Pixar’s computing infrastructure. It is larger than that of many large Silicon Valley companies, and it needs that huge system just to create films using human creativity. Imagine how many orders of magnitude larger it would need to be to create films without human labor. It will not be worth it financially.
But AI will continue to be very useful for smaller tasks, such as creating reports, writing documents, organizing information, etc.
maritimelight
Anyone who has studied the philosophy of mind or language has been saying as much since the very advent of LLMs. I’m reminded of Sam Harris, someone with a neuroscience background who started making strong claims about science’s ability to guide morality without understanding a fundamental aspect of ethics, and thereafter ignored all criticism from people who have actually studied the field because… he was selling books after all (and quite a lot of them).
In the same way, LLMs are just a product to sell, and to sell that product you need a narrative, and the narrative that sells it is that it will become AGI if… they get enough money.
The product isn’t intelligence; it is the illusion of intelligence (a stochastic parrot). It is an excuse—for cost-cutting, for lowering standards, for surveillance and IP theft in the name of data farming, for charging for a new subscription tier, for asking for new contracts, etc. etc.
LLMs are a grift, not because they don’t have a use, but because the narrative and expectations around them are bullshit. If you buy into them beyond their capabilities as glorified search engines (the results of which you should still verify against hallucinations anyway, if you have any actual intellectual integrity), you are a mark.
1_H4t3_R3dd1t
Actually, they are pouring too much into this technology before it has gone through the cycle of innovation that would actually make it a viable product.
Wololo2502
Language models are a fascinating advancement, but there are more discoveries to be made. It is not correct to say that AI is a dead-end technology.
ReasonablyBadass
Do people really think that scaling is everything that is being tried, that new models aren’t being developed and tested?
GurthNada
This has less to do with AI itself and more with how investors behave and how the supply side adjusts to this behavior. If antibiotics, the steam engine, or the airplane were invented today, you’d also see billions being poured into dead ends related to those fields – and for all I know, maybe that’s exactly what happened at the time.
TerryLO439
I think we’re going to see a repeat of history, with another AI winter coming. I don’t know when, but it’s coming – maybe it’s even starting already; we’ll see. The bubble is eventually going to burst.
DunkingDognuts
Just think of the benefits had they taken those billions and poured them into developing people instead.
kataflokc
So, are these the same researchers telling us that AI is going to take over the world and needs to be regulated as a menace to society?