AI Slop Is Spurring Record Requests for Imaginary Journals

https://www.scientificamerican.com/article/ai-slop-is-spurring-record-requests-for-imaginary-journals/


  1. OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot and other models are befuddling students, researchers and archivists by generating “incorrect or fabricated archival references,” according to the ICRC, which runs some of the world’s most used research archives. (Scientific American has asked the owners of those AI models to comment.)

    AI models not only point some users to false sources but also cause problems for researchers and librarians, who end up wasting their time searching for requested records that don’t exist, says Sarah Falls, the Library of Virginia’s chief of researcher engagement. Her library estimates that 15 percent of the emailed reference questions it receives are now ChatGPT-generated, and some include hallucinated citations for both published works and unique primary-source documents. “For our staff, it is much harder to prove that a unique record doesn’t exist,” she says.

  2. Uuugh, the AI satanic panic is exhausting. Can 2026 please not be like this? Can we stop talking about slop?

  3. The students should have “double checked it” like Gemini says at the bottom of every response. 😂

  4. hangfromthisone:

    It is amazing that people still don’t understand that LLMs don’t have any real data. They just don’t work that way. An LLM is not a database, as the sketch after these comments illustrates.

  5. So we’ve dumped billions into something that works worse than a circa-2000s Google search engine while also using vastly more resources. Wonderful.
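
A minimal sketch of the point in comment 4 (a toy contrast, assuming nothing about any real model, archive system, or API): a catalog lookup can return an explicit miss, while a next-token-style sampler only assembles plausible text, which is how citation-shaped references to records that never existed end up in reference emails. Every record ID, word list, and function name below is invented for illustration.

```python
# Toy sketch only: contrasting a catalog lookup with a next-token-style sampler.
# Nothing here is a real archive system or a real model API; the record IDs,
# word lists, and function names are invented for illustration.
import random

# A catalog is a lookup table: a miss is an explicit "no such record."
CATALOG = {
    "LVA-1923-017": "Governor's correspondence, 1923",
}

def catalog_lookup(record_id: str) -> str:
    """Return the stored description, or an explicit miss."""
    return CATALOG.get(record_id, "NO SUCH RECORD")

# A language model, very roughly, samples plausible next tokens. It has no
# concept of a miss, so it assembles a citation-shaped string either way.
TOPICS = ["Colonial", "Virginia", "Maritime", "Agricultural"]
VENUES = ["Journal", "Review", "Archives", "Quarterly"]

def sampler_style_citation(query: str) -> str:
    """Assemble a plausible-looking (fabricated) citation for any query."""
    rng = random.Random(query)  # seeded per query, like a deterministic decode
    return (f"{rng.choice(TOPICS)} {rng.choice(VENUES)} "
            f"{rng.randint(1, 40)}({rng.randint(1, 4)}), "
            f"pp. {rng.randint(1, 200)}-{rng.randint(201, 400)}")

if __name__ == "__main__":
    print(catalog_lookup("LVA-9999-999"))          # NO SUCH RECORD
    print(sampler_style_citation("LVA-9999-999"))  # fluent but fabricated citation
```

The toy “sampler” never consults anything that could answer “not found,” which is the gap the Library of Virginia staff describe: a fabricated reference to a unique record can only be disproved by going and searching for it.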