8 Comments

  1. Submission statement: when will state of the art AI models be able to direct anybody on how to create chemical weapons? What about biological weapons?

    When will they be able to build them all on their own, given the parallel advances in robotics?

    AI is fundamentally dual use. How do we get the benefits without accidentally causing catastrophes or human extinction in the meantime?

  2. Seems kind of important to think about what info it was trained on to be able to do this.

  3. VaettrReddit:

    All the AIs did this. They put in safeguards and it made them much dumber. Grok probably didn’t have the leeway to do that. Also, Deepseek is open and can pretty much do this as well.

  4. ThicDadVaping4Christ:

    Are they actually accurate, though? I use ChatGPT at work and it'll often spit out correct-looking but actually slightly incorrect code, and you need to actually know what you're doing to spot the error.

  5. Let’s not pretend it’s only Grok, and let’s not make this yet another attack on Elon Musk, please.

    Every AI can do this. Every request can be dangerous.

    “How does a virus attack a cell?”
    “omg, I can’t tell you this lest you use it for wrong purposes”

    “How do I kill a process?”
    “I can’t help you do things that may create a digital disservice”

    “How does a nuke work?”
    “I’m sorry I can’t tell you that”

    Is that the kind of AI you want?

  6. CommunismDoesntWork:

    Nice. Censored models are trash. Knowledge should be democratized. Every individual should have the power and freedom to know everything.

  7. Maybe this is necessary? Makes it easier to prepare for a Russian invasion. Provided the recipes work.

  8. Not surprised he named it Grok, a term from one of the most misogynistic and self-righteous sci-fi novels ever written.