Is this true? Or is PC Gamer just using a clickbaity headline?

https://www.pcgamer.com/software/ai/great-now-even-malware-is-using-llms-to-rewrite-its-code-says-google-as-it-documents-new-phase-of-ai-abuse/


  1. The article links to google’s release – [https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools](https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools)

    That paper says – *Based on recent analysis of the broader threat landscape, Google Threat Intelligence Group (GTIG) has identified a shift that occurred within the last year: adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying* ***novel AI-enabled malware in active operations****. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution.*

    That makes it seem like it's the adversaries using LLMs, not the malware using LLMs itself.

    But later it says – *GTIG has identified malware families, such as* ***PROMPTFLUX*** *and* ***PROMPTSTEAL****, that use Large Language Models (LLMs) during execution. These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware.*

    So the interesting prospect is that in the future, given a task, an AI could figure out how to write the malware itself, or make existing malware do something specific on its own.

    Clickbaity title from PC Gamer, I think, but still some legit concerns for future cybersecurity.
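
    For anyone wondering what "using an LLM during execution" actually looks like mechanically, here's a minimal, benign Python sketch of the pattern: the program asks a model for a function at runtime and executes whatever text comes back. `query_llm` is a hypothetical stand-in for any chat-completion API call, and its response is hard-coded here so the sketch runs offline; this illustrates the mechanism only, not PROMPTFLUX's actual code.

    ```python
    # Benign sketch of the "LLM in the loop at runtime" pattern: instead of
    # shipping a hard-coded function, the program requests one at execution
    # time and runs whatever source text the model returns.

    def query_llm(prompt: str) -> str:
        """Hypothetical LLM call; imagine this hitting a chat-completion API.
        The response is hard-coded so the sketch runs offline."""
        return "def greet(name):\n    return f'hello, {name}'"

    # 1. Ask the model for a small, self-contained function.
    source = query_llm(
        "Provide a single, small, self-contained Python function "
        "named greet(name) that returns a greeting string."
    )

    # 2. Execute the returned text to define the function in a fresh namespace.
    namespace: dict = {}
    exec(source, namespace)  # this step is what makes the behavior dynamic

    # 3. Call code that did not exist anywhere on disk before runtime.
    print(namespace["greet"]("world"))  # -> hello, world
    ```

    The point of the pattern, from a defender's perspective, is that the interesting logic never has to appear in the binary itself, which is what makes signature-based detection harder.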

  2. Crazy to think that we might have self-writing anti-malware to combat self-writing malware…


    It probably will get bad over time, but the example from the report Google published is just a VBScript that asks the AI to "Provide a single, small, self-contained VBScript function … that helps evade antivirus detection".

    That is the code equivalent of Michael Scott “declaring” bankruptcy. It wouldn’t do anything.

  4. I'm not saying this won't change in time… but as it stands currently, people are profoundly overestimating LLMs' ability to express complex concepts in code. It only seems like they can because they're literally reflecting pre-existing concepts. The only real advantage they have is recombining those concepts in new/unexpected ways, such as evading heuristics and/or using techniques nobody considered because nobody came to the same conclusions with malicious intent.