One chilling forecast of our AI future is getting wide attention. How realistic is it? – Rapid changes from AI may be coming far faster than you imagine.

https://www.vox.com/future-perfect/414087/artificial-intelligence-openai-ai-2027-china

  1. From the article

    By 2027, enormous amounts of compute power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight — not because AI companies don’t *want* to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans.

  2. sciolisticism on

    > Let’s imagine for a second that the impressive pace of AI progress over the past few years continues for a few more. 

    Well there’s your first problem.

  3. Capital_Sherbert9049 on

    If things continue at this rapid pace, we will have created an artificial general intelligence by 2025.

    – Sam Altman.

  4. The 2027–2030 range is what most industry people seem to have settled on.

    There are some 2040 outliers, but a lot of people who had AGI/ASI at 100+ years out, or never, are moving to 2050.

    Honestly, a lot of people seem to have settled on the 2030 date, plus or minus a year or two.

  5. Oh no! Is the horse and buggy going to go away too? And human calculators? This is madness.

  6. Uh, we have already seen what can happen when a group of humans uses AI to train another AI… it’s how DeepSeek was trained.

    And yes, it was utterly amazing. It led to all sorts of things that OpenAI, Google, etc. hadn’t been doing. Like some students at Berkeley running a copy of DeepSeek on a Raspberry Pi.

    Then factor in all the open-source AI agents that had been running around on the internet for the two years before OpenAI released their version.

    We have UTTERLY no clue how much of the internet is now AI-produced stuff without ANY human driving it anymore. For all we know, there are whole content-creation companies out there that are just AI running the whole thing. It wouldn’t even be that friggen hard with all the tools out there for humans to use. All you have to do is give the AI a way to make money, set the initial stuff up, and bam! Off it goes.

    There is even a guy who did this with Facebook using the ChatGPT API… I can’t even remember how much money people have given it. Last I heard (like 4 months ago) he was debating giving it a way to spend the money it had made.

    We don’t need what people consider AGI to take over the world… we just need LLMs with the ability to interact with the internet.

    …Which we have already.

    It doesn’t have to be self-aware / sentient to run the world, people.

    Note: I’m not saying it will do a GOOD job of running the world. But it can’t really do much worse than we humans have been doing.
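    The “LLM that can interact with the internet” idea above is basically an agent loop: the model proposes an action, a tool executes it, and the result is fed back until the model decides it is done. A minimal sketch, with the model and tools stubbed out — all names here (`run_agent`, `TOOLS`, `fake_model`) are illustrative, not any real framework’s API:

    ```python
    # Toy tools standing in for real internet access (search, posting, etc.).
    TOOLS = {
        "search": lambda query: f"top result for {query!r}",
        "post": lambda text: f"posted: {text}",
    }

    def fake_model(history):
        """Stand-in for an LLM: requests one tool call, then declares done."""
        if not any(msg.startswith("tool:") for msg in history):
            return "CALL search trending topics"
        return "DONE summary written"

    def run_agent(model, max_steps=5):
        """Loop: ask the model, execute any tool it requests, feed the result back."""
        history = ["goal: make content and post it"]
        for _ in range(max_steps):
            reply = model(history)
            if reply.startswith("CALL "):
                name, _, arg = reply[5:].partition(" ")
                result = TOOLS[name](arg)       # tool runs with no human in the loop
                history.append(f"tool:{name} -> {result}")
            else:
                history.append(reply)           # model decided it is finished
                break
        return history
    ```

    Swap `fake_model` for an API call to a hosted model and `TOOLS` for real search/posting/payment endpoints, and you have the unsupervised setup the comment is describing.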

  7. Radical change is coming very very quickly

    The first large scale contracts for industrial humanoid robots are already in place and the numbers will grow more and more quickly. At least 2 of the companies are targeting 2027 for the first domestic models.

    By 2030 the basic way people live their lives will already be radically changing, and that’s just the initial shocks of one use case.

    It will be here in the blink of an eye. We are boomers in 1985 talking about these new computer thingies.

  8. MarcMurray92 on

    Has anybody in this sub used a computer before? So many bullshit sensationalist headlines

  9. MediocreClient on

    Look, I hate to be the type of grumpy curmudgeon who dismisses an entirely new category of tech as “hype”, but the only thing more depressing than the article is the comment section. LLMs may be ‘improving’, but the cost of the resources we’re pouring into them to make them more efficient, and the opportunity cost of continuing to do so, are rising rapidly. Exponentially, even. But here’s the kicker: *we’re not getting anything for it*.

    Outside of generating ragebait and fake political content to rope in geriatrics and Gen Xers on social media, LLMs, and all of the frontloaded and backend expense of them, are not actually *doing anything*. Sure, there are a few things that LLMs and/or diffusion models turned out to be wildly skilled at; protein folding is just such an example. *Crazily* good at it. A genuine watershed moment for people in the protein-folding scene. But that is an absolute edge point with no transference.

    LLMs are a global waste case at the industry level. Business operators, and the shareholders behind them, are rapidly growing frustrated and disillusioned with the LLM solutions they’ve been scrambling to deploy… Because the fucking things break down *constantly*. All of the time. And guess what? As LLMs “evolve” and “improve”, those breakdowns are happening faster, and more frequently, with each iteration. The cost of those breakdowns is now piling up *quickly*, and the overwhelming majority of LLM deployments are not increasing revenue.

    It turns out, LLMs are terrible at both downstreaming and upstreaming, which is an incredibly… ‘unique’ problem for a tech solution that is meant to change the world. LLMs are single-transaction entities. Except for those pesky edge cases, LLM output overwhelmingly needs to be ‘the final product’, and god help you if you need two or more LLMs to *interact* in the real world.

    Odds are high that everybody in this thread works at a company that has deployed some version of an LLM to do *something* in the company, for no other reason than they got hit with the hypetrain. And odds are equally high that those companies are spending a lot of time and even more money behind the scenes quietly trying to keep the fucktrain on the fuckrails. And it isn’t going well.

    Back in the day, machine learning and neural networks hit Wall Street like a fucking wildfire. They revolutionized large-scale finance on a timeline that is, depending on how you define events, measured in months. Finance in general looked at generative LLMs years ago and placed them firmly in the speculative investment camp, with zero interest in structural deployment, and that is *very* telling for an industry built from the ground up on crunching data to the bone. Any article you read about Wall Street using LLMs is falsely equating Chat Gippity with ML/DL/NN models that have existed for years to decades. Why? I don’t know. Stolen valour, possibly.

    The fields of law and accounting have been experimenting with LLMs, but they are also quickly (or slowly) discovering that they can do everything an LLM *says* it can do with a 30-year-old ML/NN, at the same speed, for a tenth of the price and a thousandth of the compute cost.

    LLMs are a massive resource and capital draw, but that is not indicative of usefulness. I have no doubt they’ll be around for a long time, in some iteration or another. They’ve already made their mark and planted their flag; but there’s still no water on this particular moon, no matter how quickly LLMs can scan the rocks.

    Televisions, telephones, smartphones, home computers, the internet… These things all ‘existed’ for a while as they grew, and then went from “mainstream” to “ubiquitous” at an incredibly rapid pace. LLMs have been around for a while (a lot longer than people think, in fact), but their “mainstream” moment came and went nearly a decade ago. You know what the keynote presentation at every LLM summit I’ve seen for the past two years running has been? “How to find revenue streams for LLM solutions”. It is an industry of hammer-makers begging for somebody to invent the right nails.