Researchers poison stolen data to make AI systems return wrong results

https://www.theregister.com/2026/01/06/ai_data_pollution_defense/

9 Comments

  1. FerrusManlyManus on

    Glad there is a picture of poison there, marked poison, that has nothing to do with how they poisoned the data lol.

  2. But they’ll return the wrong data even if you don’t do this. That’s sorta one of the major problems…

  3. The headline may be misleading: the technique poisons the knowledge-graph representations built from already-scraped data, so any other AI system using that same knowledge graph returns wrong results unless it also holds the secret key. It’s not at all useful for the average person.

  4. “The first shots fired in the data wars were mighty confusing to the average person.”

    -Mark Twain

  5. averagebear_003 on

    >The threat model here assumes that the attacker has been able to steal a KG outright but hasn’t obtained the secret key.

    This sounds like an unrealistic threat model at first glance. Isn’t it harder to steal a knowledge graph than to steal a secret key?

  6. AlanShore60607 on

    Wow. We’re really that certain that AI won’t become sentient and ***remember who poisoned it!?!***