The future of this sub is one we need to look at carefully. There is a lot of fear-mongering around AI, and the vast, vast majority of it is completely unfounded. I'm happy to answer any questions you may have about why AI will not take over the world, and I will be responding to comments as long as I can.

AI is not going to take over the world. These programs, LLMs included, are written to achieve a very specific goal, but they are not "generally intelligent". Even the term "general intelligence" is frequently debated in the field; humans are not generally intelligent creatures, as we are highly optimised thinkers for specific tasks. We intuitively know how to throw a ball into a hoop, even without knowing the weight, gravitational pull, drag, or anything else. However, making those same kinds of estimations for things we did not evolve to do (how strong is a given spring?) is very difficult without additional training.

Getting less objective and more opinionated in my own field (other ML researchers are gonna be split on this part): we are nearing the limit of our current algorithmic technology. LLMs are not going to get that much smarter. You might see a handful of small improvements over the next few years, but they will not be substantial, certainly nothing like the jump from GPT-2 to GPT-3. It'll be a while before we get another groundbreaking advancement like that, so we really do all need to just take a deep breath and relax.

Call to action: I encourage you, please, please, to think about things before you share them. Is the article a legitimate concern about how companies are scaling down workforces as a result of AI, or is it a clickbait title for something sounding like a cyberpunk dystopia?

From the perspective of a Machine Learning Engineer
by u/Th3OnlyN00b in Futurology


13 Comments

  1. >However, making those same kinds of estimations for other things we did not evolve to do (how strong is a given spring) is very difficult without additional training.

    The fact that we can learn with additional training or experimentation is what makes us a form of general intelligence. Fluid, model-making intelligence, specifically.

  2. I’m not worried about AI “taking over the world” as much as I’m worried about people who don’t know what they’re doing implementing AI into tasks that it can’t do reliably or safely.

    I will say that in the practical sense, humans have general intelligence and that is largely because of how we define what general intelligence is.

  3. I agree completely with what you are saying about the limitations of current-gen LLM “AI”. But I also think the worry about replacing humans is greatly exaggerated while simultaneously not being taken seriously enough. For example, I’d argue a great many white-collar jobs do not require much outside what can be accomplished by an LLM today. I think the next 5-ish years are probably going to be a bit tumultuous as we see a significant shift in job markets, but time will tell whether that shift is beneficial or detrimental (I tend to lean towards the former, despite having no idea what it will look like).

  4. NoPerformance5952

    Lol, what people mean isn’t that Skynet will doom us all. They mean executives will force this garbage into every aspect of business/management, even if it is manifestly not made for that use. They are trying to cram it into law, accounting, and almost all other things, while increasing workloads and ravaging employment. Fuck LLMs, fuck AI, and we need regulations on this shit before it does something irrevocable.

    Edit- typo

  5. I think Yann LeCun is right about LLMs hitting a hard ceiling on the path towards AGI. Which is a good thing because it’ll tame our acceleration into the unknown that society is woefully unprepared for. Ironically, I think OpenAI and Meta will spend themselves into oblivion if their bet on AGI is wrong (Meta has a fallback strategy with [VR glasses and porn though](https://mashable.com/article/meta-pirated-porn-ai-training-lawsuit?test_uuid=003aGE6xTMbhuvdzpnH5X4Q&test_variant=a)). Google is hedging by focusing on world simulation applications instead, which is already going to make them dominate video advertising/media, and their DeepMind division will also have promise in biotech/pharma.

    At the same time, the current set of AI tooling gives individuals and smaller orgs a chance to catch up as viable competitors to enterprise solutions. And they’ll be catching up relative to blue chip corporations if the pile of cash being burned on LLMs yields diminishing returns.

  6. Solid-Refrigerator52

    But do you think AI will cause the elevation of the mountains to go higher? What I mean by that is, if you look at a chart of historical unemployment, it looks like a series of mountains adjacent to one another. So, cycles of boom and bust, growth and contraction. Is it possible that unemployment due to AI rises to something like 10-12% (at some future date, 5 years, 20 years, whatever) and then, like you referred to, there’s growth and the unemployment rate comes down, but it doesn’t get back down to 3%, 4%, or 4.5%, and instead settles at something like 7% to 7.5%?

    [https://fred.stlouisfed.org/series/UNRATE](https://fred.stlouisfed.org/series/UNRATE)

  7. dr_tardyhands

    There are also some more legitimate fears that have been raised by the likes of Geoffrey Hinton. E.g. how easy it will be for a single individual with a working knowledge of molecular biology wet-lab work to design and create new viruses.

  8. TrueCryptographer982

    As I just said elsewhere, it’s incredible how experts in the field cannot reliably predict the next 5 years and where we will be, but redditors and bloggers can predict with certainty that the earth will be a hellscape in 20 years.

    Thank you for trying to inject some sanity. I have not yet read the comments, but I assume the doomsayers are none too happy.

  9. My concerns are (I believe) more in alignment with your characterisation of the abilities and prospective future improvements of AI. Please correct me if I am wrong, and give your two cents on the following:

    1. People are overestimating AI, overusing it (possibly even out of FOMO), and are unfortunately ill-suited to judge the validity of its output, especially in the spheres where they are most likely to rely on it. Imo this is a big risk factor.

    2. The use of LLMs to produce texts for human consumption is in my opinion profoundly disrespectful, even callous. It’s the service-hotline bot issue: no one wants to be on the receiving end of this. Meanwhile, on our city council I was literally the only person who voted against the city administration adopting AI for public-service uses and for producing meeting minutes (I am also the only council member who works in IT, afaik).

    3. The loneliness epidemic, the social media obsession, the dead internet, the short attention span issue, cyber bullying, misinformation and election interference, etc are all slated to be worsened by „AI“ imo.

    4. The fact that the US electricity grid is already a limiting factor for the expansion of the AI market doesn’t bode well. Each time it looks like we are making headway towards a more sustainable energy supply situation we find a new way to waste unprecedented amounts of it.

    5. Most of the output is such slop, it’s even worse than viral marketing used to be. I’m not even forty and I am kind of too old for this shit. I know it’s a new tool and creating actually usable content with it is a skill, but oh boy. It’s like back when Word, Paint, etc. were new all over again.

  10. Your comment about springs landed funny with me, as I’m a Marine Diesel Engineer and can probably take a decent guess at a spring rate just by looking at one.

    But I’ve been dealing with springs and most mechanical things humans do for over 20 years now; you just sort of absorb things.

    A question, though: how do you work around error bands in technical questions for an LLM? For example, I find that they tend to fall flat when faced with detailed technical questions around the 200- to 300-level engineering courses from a university.

    They are good at giving general guidelines for how to approach a problem but they constantly miss important steps and really fall short when assumptions need to be made for an unknown coefficient of thermal expansion etc.

    Thanks for taking the time to answer questions!

  11. I hate to break it to you, but if we could stop misinformation by posting stuff like this to social media, it would have been eradicated long ago. That said, while I agree that a lot of articles and posts here are nothing more than clickbait nonsense, AI (even in its current state) does pose some very real and serious threats.

  12. >Is the article a legitimate concern about how companies are scaling down workforces as a result of AI, or is it a clickbait title for something sounding like a cyberpunk dystopia?

    Tech companies are spreading absurd predictions about the future of work all the time. I can only imagine they’re doing that to keep the hype alive. It’s only fair that we also use every available avenue to fight back.