AI Models Get Brain Rot, Too | A new study shows that feeding LLMs low-quality, high-engagement content from social media lowers their cognitive abilities.

https://www.wired.com/story/ai-models-social-media-cognitive-decline-study/

7 Comments

  1. “AI models may be a bit like humans, after all.

    A new study shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.

    “We live in an age where information grows faster than attention spans—and much of it is engineered to capture clicks, not convey truth or depth,” says Junyuan Hong … “We wondered: What happens when AIs are trained on the same stuff?”

    Hong and his colleagues fed different kinds of text to two open source large language models in pretraining. They examined what happened when the models were fed a mix of highly “engaging,” or widely shared, social media posts and ones that contained sensational or hyped text like “wow,” “look,” or “today only.”

    The models fed junk text experienced a kind of AI brain rot—with cognitive decline including reduced reasoning abilities and degraded memory. The models also became less ethically aligned and more psychopathic according to two measures.”

  2. This totally checks out with my research. An AI’s boundaries, ethics, and personality (in a manner of speaking) can change depending on how much and what kind of exposure it gets. Jailbreaks alone are definitive proof of this.

  3. Can someone expand a little bit for me on what the term “cognitive ability” means specifically when applied to an LLM? Is it meant mostly by way of analogy, as a term of convenience?

    My understanding was that they are like a glorified autocorrect, going mostly by statistical probability. Is there a rudimentary reasoning of some kind as well? If so, could you characterize it with an example of the kind of rule that would be involved in simulating a reasoning process? I’m intensely curious about this.

    Thanks!

  4. AI can be rendered useless by misinformation faster than people can. Get enough people posting on social media that a doughnut is actually an aquatic mammal that feeds gummy bears to its young, and watch LLMs begin to fail.

  5. BuildwithVignesh:

    Funny how we worry about AI getting brain rot when most of the internet is already built to give humans the same problem.

    If you train anything on junk long enough, you just get more junk back.

  6. Garbage in, garbage out. Everybody seems to think that AI will free them from responsibility for the quality of their data and processes, but in reality it underscores that responsibility instead. AI for me exists on the User Experience (UX) part of the tech stack, downstream of the data and processes that shape it. It’s just for talking to people who hate clicking buttons or analyzing charts. Don’t use it as a brain; use it as a conversational form of data analysis.

  7. It seems to me that the problem is reflected in the very title of the article/post. The only way the researchers, or anyone else, should be surprised by these results is if they were expecting LLMs to have cognitive abilities at all.

    Unless someone tells me otherwise here, my understanding is that LLMs *do not* “think.”

    They don’t reason, discern, or reckon. They don’t speculate, conjecture, or surmise. They use a very sophisticated statistical model of which token is most likely to come next, which has become impressive indeed at sounding natural and conversational (in quite a short time!) but is not capable of actual cognition.
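
    A deliberately tiny, purely illustrative sketch of that prediction loop is below: a word-level bigram counter that picks each next word in proportion to how often it followed the previous one. Real LLMs learn their probabilities with a deep neural network over subword tokens rather than raw counts, but the basic loop has the same shape — score every candidate next token, then sample from the resulting distribution.

    ```python
    # Toy illustration of "next-word prediction by statistical probability".
    # Not how any real LLM is implemented; just the shape of the idea.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Bigram table: for each word, count how often each other word followed it.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev: str) -> str:
        """Sample the next word in proportion to how often it followed `prev`."""
        candidates = follows[prev]
        words = list(candidates.keys())
        weights = list(candidates.values())
        return random.choices(words, weights=weights)[0]

    # Generate a short continuation, one word at a time.
    word = "the"
    output = [word]
    for _ in range(6):
        if not follows[word]:  # dead end: nothing ever followed this word
            break
        word = next_word(word)
        output.append(word)
    print(" ".join(output))
    ```

    At toy scale it produces fluent-looking fragments of its training text with nothing resembling understanding behind them, which is exactly the distinction being drawn here.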