Researchers Find Elon Musk’s New Grok AI Is Extremely Vulnerable to Hacking – “Seems like all these new models are racing for speed over security, and it shows.”

https://futurism.com/elon-musk-new-grok-ai-vulnerable-jailbreak-hacking


  1. Submission statement: “Researchers at the AI security company Adversa AI have found that Grok 3, the latest model released by Elon Musk’s startup xAI this week, is a cybersecurity disaster waiting to happen.

    The team [found that the model](https://adversa.ai/blog/grok-3-jailbreak-and-ai-red-teaming/) is extremely vulnerable to “simple jailbreaks,” which could be used by bad actors to “reveal how to seduce kids, dispose of bodies, extract DMT, and, of course, build a bomb,” according to Adversa CEO and cofounder Alex Polyakov.

    And it only gets worse from there.”

    The largest risks from AI come from lack of ability to control advanced AIs, but another source of risk is misuse. Given the rate of progress in AI abilities, how should AI labs deal with the fact that we currently can’t make un-jailbreakable models?

  2. The tech bros so often rush things out. You’re only a beta tester if you use this drivel. Shame them and demand better.

  3. Pretty sure he has said this is intentional, and even demonstrated on a podcast how you can get it to be vulgar.

  4. Top-Salamander-2525:

    All of these AI models can be coaxed into giving sketchy advice unless that advice and material related to it is completely scrubbed from their datasets.

    You could safeguard an API with an extra nanny model that prevents the model from returning dangerous responses.

    You can’t completely safeguard the models themselves. And even if you could, people would just retrain them to break it.
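    The "nanny model" idea above can be sketched as a simple gate: a second classifier screens the primary model's output before it reaches the caller. This is a minimal illustration, not any real API; `generate`, `is_unsafe`, and the keyword list are hypothetical stand-ins (a production guard would be a trained safety classifier, not a keyword match).

    ```python
    # Toy sketch of an output-screening "nanny model" gate.
    # All names here are hypothetical; the keyword check stands in
    # for a real safety classifier.

    UNSAFE_TERMS = {"bomb", "dmt"}  # placeholder blocklist

    def generate(prompt: str) -> str:
        """Stand-in for the primary model's completion call."""
        return f"Echo: {prompt}"

    def is_unsafe(text: str) -> bool:
        """Toy guard: flag responses containing blocked terms."""
        lowered = text.lower()
        return any(term in lowered for term in UNSAFE_TERMS)

    def safeguarded_generate(prompt: str) -> str:
        """Only return the model's response if the guard clears it."""
        response = generate(prompt)
        if is_unsafe(response):
            return "Request refused by safety filter."
        return response
    ```

    Note the structural point the commenter makes: this gate lives at the API layer, outside the model's weights, so it does nothing for anyone who downloads and runs (or retrains) the model directly.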

  5. This is extremely concerning, especially given that I’m convinced he’s installing his AI into every government system he compromises.