20 Comments

  1. engineered_academic on

So essentially it just compresses the attack timeline, making mitigation and response no longer nice-to-haves or optional. Nothing new here, folks; just shitty cybersecurity practices being called out.

  2. Nothing will change until there are consequences for an organization suffering a breach.

  3. The real, persistent use for AI is probably going to be in cybersecurity, to fight itself

  4. Ok_Passion295 on

    future of cybersecurity:
    hacker: “claude attack government”
    government: “claude stop hacker”
    repeat

  5. JoraStarkiller on

This is the problem with not having ethical guardrails in place: the opportunities for exploitation are limited only by imagination.

I just want to produce some Python code to start some calculations in analysis and do postprocessing afterwards with MATLAB, but I can’t get Copilot to produce something useful.

  7. VerdantPathfinder on

Maybe we shouldn’t have fired all the cybersecurity people in the government … just a thought.

  8. FloridaMMJInfo on

    So AI is a national security threat and should be made illegal to develop and own.

  9. Why are these guys always breaching government sites to steal shit, but never breaching credit reporting agencies, predatory loan companies, etc., and “fixing” some things? Come on, y’all can do it, and the world could use that right about now.

  10. Single-Use-Again on

How are ppl doing this? Wouldn’t chat be like, “Yeah, we don’t do malicious things like that”?

  11. trilobyte-dev on

    There was a good talk last week at a conference by a CSO who laid out how open-weight LLMs are now good enough so that state-sponsored attackers are running OpenClaw and local LLMs like Deepseek to plan and execute (infiltration, data discovery, exfiltration) attacks entirely automated and without the risk of the attacks showing up in OpenAI or Claude logs that can be traced back to them.