AI-generated code contains more bugs and errors than human output

https://www.techradar.com/pro/security/ai-generated-code-contains-more-bugs-and-errors-than-human-output


  1. No shit. I can’t even get it to produce a simple PowerShell script that works, let alone some mammoth coding job… it’s a con job.

  2. It really is hit and miss with AI-generated code, and you need proper skills to distinguish which of the two it is each time.

  3. AI can be a good tool for a coder for boilerplate code and when used within a smaller context. It’s also good for explaining existing code that doesn’t have too many external dependencies and stuff like that. Without a human at the steering wheel it will make a mess.

    You need to understand the code generative AI produces, because it does not understand anything.

  4. AI needs to be used as a helping tool. You cannot code or create by completely relying on AI itself.

  5. Shopping_General

    Aren’t you supposed to error-check code? You don’t just take what an LLM gives you and pronounce it great. Any idiot knows to edit what it gives you.

  6. There’s such a repeatable pattern with this stuff; it’s depressing and so obvious.

    Someone with a name like “MrILoveAI” will say “I used AI to vibecode a million-line app that works perfectly” but can’t point to any evidence and calls everyone else a Luddite.

    Meanwhile, those of us who work in enterprise dev, have tried AI, and realised it hallucinates too much to be more than an interesting toy just roll our eyes.

    The waters are also muddied because so many of the posts are clearly sales pitches, or even bots generated by AI. It’s all a circle jerk at best, a Ponzi scheme at worst.

  7. As long as we don’t have general AI, LLMs are gonna make stupid mistakes day and night, because they don’t understand at all what they’re doing – just picking puzzle pieces randomly until they fit…

  8. Because a good programmer can tell when his program doesn’t work; he may not know why, but he knows when it does and doesn’t work… The problem with AI is that it always assumes its output works until you question it, and *then* it’ll repeat the same process of assuming the next answer **is** correct.

  9. I know nobody in the comments checked the link before commenting, but this article is absolute dog shit. No information about methodology, no context on what models we’re talking about, and no link to the actual “study”.

    I’d say this might as well be a tweet, but even tweets in this category tend to link an actual source.

  10. I am just waiting for the first catastrophe with lost lives. After that, this will go the way of the zeppelin, I lowkey hope…

  11. The real issue is people treating AI like a senior dev instead of a junior one. If you review it properly, it saves time. If you trust it fully, it creates chaos.

  12. IPredictAReddit

    Given the hundreds of billions invested in making AI a thing, I expect the next five years to be an onslaught of “BUT IT IS CLOSE ENOUGH!!” from these leveraged investors.

    Yeah, it’s got more bugs in it, but look, anyone can now get almost-ready-for-primetime, kinda buggy code! Sure, you need to have the same level of expertise to troubleshoot it as you needed to write it right the first time, but you get to watch the totally-alive-AI-agent-that-has-feelings-and-is-conscious put it together for you! Shut up and pay money for this or the economy will tank!

    Yeah, the self-driving car killed a few people, and does dick moves all over the road, and drives around school buses that are actively dropping off your kids, but our investments depend on you putting up with that, so shut up and bury your kid. Better yet, have more kids so that you can spare a few to the FSD investment gods!

  13. Enjoy.

    New confusing bugs, weird behaviours, hard-to-reproduce errors.
    Bloated programs requiring more memory and processing power, and nobody knowing how to fix them besides AI debugging itself and going in circles.