13 Comments

  1. From the article: Major tech players have spent the last few years betting that simply throwing more computing power at AI will lead to artificial general intelligence (AGI) – systems that match or surpass human cognition. But a recent survey of AI researchers suggests growing skepticism that endlessly scaling up current approaches is the right path forward.

    A recent survey of 475 AI researchers reveals that 76% believe adding more computing power and data to current AI models is “unlikely” or “very unlikely” to lead to AGI.

    The survey, conducted by the Association for the Advancement of Artificial Intelligence (AAAI), reveals a growing skepticism. Despite billions poured into building massive data centers and training ever-larger generative models, researchers argue that the returns on these investments are diminishing.

    Stuart Russell, a computer scientist at UC Berkeley and a contributor to the report, told New Scientist: “The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced.”

  2. servermeta_net

    I would say the current AI is good enough to justify the investment, and hopefully, as they try to improve it, they'll make enough architectural leaps to put the additional capacity to good use.

  3. pain_vin_boursin

    Tired of these types of articles. No one is focusing on compute as the only vector for improving AI models; every major improvement in recent years has been driven by architectural innovations.

  4. I’m sure companies are burning billions trying to create different architectures for AI than the ones we use today.

  5. Yes, you can’t code AGI. Coding (or LLMs or whatever) requires humans to create everything the system can do. While such systems can extrapolate a bit, it is still humans who created that capability.

    Humans cannot create AGI… well, we can through procreation, lol.

    I’d look into the companies mixing tech with organic cells, though, as that’s where the next revolution in AI technology will come from. The only way to have AGI is for something to literally be alive.

  6. StrikingHarper

    Don’t really know if it’s scalable or not. But I’ll be thrilled to see the results; as always, it’s so fascinating seeing what comes out of these big investments.

  7. No one is doing that. But more compute seems to be a requirement for moving forward *and* is also needed to serve clients, iterate faster, and run more experiments.

  8. ThinkItSolve

    I believe quantum AI is the future, so throwing more compute at these current binary systems is essentially pointless. However, what we can do with AI in daily life still has a long way to go. AI as a product is still in its infancy, hardly scratching the surface.

  9. Well, it makes sense. AGI isn’t just a bigger version of the AIs we have now, which are mostly LLMs (large language models).

    It’s almost like having an airplane, and then saying “if we keep making bigger airplanes, we can get to Mars, right?” Which sounds logical if you *don’t* think about it too much. But if you think about it, you realize that while both fly, an airplane and a spaceship are two very different things, even though on the outside they have a number of features in common.

  10. zapodprefect55

    There is also the cost-benefit ratio. The server farms use crazy amounts of electricity and cooling. It’s why Altman wanted fast-tracking for nukes. We also need to ask what AGI is going to be. Will we understand it? Does it offer any real benefits?

  11. jefftchristensen

    Every other month I am blown away by new AI releases. I do not think this trend is going to stop. Even if we stopped developing AI today, all of the new innovations that will come from the existing models are going to change life as we know it.

  12. I think the current LLM architecture will lead to dedicated task agents and other tailored applications. But language (as written) only exists in half of our brain, and as such, this current model will never achieve AGI.

    To be fair, it’s only recently that the cellular structure of the cortex has begun to be worked out, and those physical structures will provide a better roadmap forward. That said, I think the most effective approach will be in tandem with physical robotics, since so much of our experience, and therefore our humanity, is derived from our physical senses.