OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot and other models are befuddling students, researchers and archivists by generating “incorrect or fabricated archival references,” according to the ICRC, which runs some of the world’s most used research archives. (Scientific American has asked the owners of those AI models to comment.)
AI models not only point some users to false sources but also cause problems for researchers and librarians, who end up wasting their time looking for requested nonexistent records, says Library of Virginia chief of researcher engagement Sarah Falls. Her library estimates that 15 percent of emailed reference questions it receives are now ChatGPT-generated, and some include hallucinated citations for both published works and unique primary source documents. “For our staff, it is much harder to prove that a unique record doesn’t exist,” she says.
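Fabricated citations often carry plausible-looking but nonexistent DOIs. A syntax check cannot prove a DOI exists (only resolving it against doi.org or the Crossref API can), but it does catch obviously malformed identifiers before a librarian spends time searching. A minimal sketch, using a pattern along the lines of Crossref's published regex recommendation for modern DOIs:

```python
import re

# Pattern for modern DOIs (based on Crossref's recommended regex).
# A match means "well-formed", NOT "this DOI actually exists".
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")

def looks_like_doi(s: str) -> bool:
    """Syntax-only check; resolving via https://doi.org is the real test."""
    return bool(DOI_PATTERN.match(s.strip()))

# Well-formed (still could be fabricated):
looks_like_doi("10.1038/nphys1170")
# Malformed — "doi:" prefix not stripped:
looks_like_doi("doi:10.1000/182")
```

Even a well-formed DOI that passes this check may be hallucinated, which is exactly the failure mode described above: the only reliable verification is resolution against the registry.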
NotObviouslyARobot
They straight up hallucinate Digital Object Identifiers
GraciaEtScientia
Now, hear me out:
Have they tried asking ChatGPT if the journals exist?
Silvershanks
Uuugh, the AI satanic panic is exhausting. Can 2026 please not be like this? Can we stop talking about slop?
DandD_Gamers
Shocker
The shitty chatbot cannot even be a good search engine.
lol
Gaiden206
The students should have “double checked it” like Gemini says at the bottom of every response. 😂
hangfromthisone
It is amazing that people still don’t understand LLMs don’t have any real data. It just doesn’t work that way. It is not a database.
ToothyWeasel
So we’ve dumped billions into something that works worse than a circa 2000s Google search engine while also using vastly more resources. Wonderful.