What would you think if you heard that a Nobel Prize winner at a top US university had to retract 15 of his scientific papers? Or that a study of Alzheimer’s disease cited more than 2,000 times — an important line of inquiry upon which further research and even clinical trials were based — had been retracted because one of the authors manipulated key images?
Maybe you saw reports just last week that a widely covered study on the contamination of people’s bodies by microplastics has come under serious question. Or that the esteemed Dana-Farber Cancer Institute paid $15 million (£11.2 million) to settle a lawsuit alleging that its researchers falsified data.
All around us, we’re hearing of cases where scientific findings are collapsing, not because new research has overturned old ideas, but because investigations into accuracy or truthfulness have turned up red flags. Traditionally, science has relied on pre-publication peer review by two or three experts per submitted manuscript. They were supposed to catch problems early, such as data that don’t add up or incorrect conclusions.
But peer review has never been as good a filter as many would like to think. It relies on volunteer labour by people who are incentivised by the academic reward system to write papers, not review them. Journal editors seek appropriate reviewers, but inevitably some don’t have the right expertise, or the time, to dig into details and source material. And AI has made it easy to create and submit slop that is somewhere between made up and useless, and some researchers — in search of jobs, promotions, and raises — use it as a shortcut.
This can have real-world consequences. In the case of the Alzheimer’s research paper published in Nature in 2006 and later retracted, funding for work on the protein at its centre rose sharply, and other scientists, not knowing the data had been manipulated, cited the paper in support of their own experiments. The paper also formed part of the basis of clinical trials of a new drug that ultimately failed, at a cost of billions.
So, how bad is the whole problem now? Much worse, it turns out, than when Retraction Watch was founded in 2010. Those of us who give our time to the website track retracted papers as a way of helping to increase transparency in the scientific process. The idea came from co-founder and science reporter Adam Marcus’s discovery two years earlier that Scott Reuben, an anaesthetist and pain-management researcher in America, had faked data in clinical trials. Reuben eventually went to prison on charges related to his scientific misconduct.
When a publisher retracts a paper, it’s saying the contents are unreliable. But back in 2010, like a tree falling in an empty forest, most retractions made no meaningful sound. So we started paying attention and broadcasting what we saw, and very quickly realised we couldn’t keep up: there were dozens a month. That figure has since grown to nearly 500 a month, with about 63,000 retractions logged in our database, a project that employs three people full-time to publicly catalogue what’s happening.
The Dana-Farber case, unearthed by the whistleblower Sholto David, exemplifies a key change behind the massive rise in retractions. Sleuths such as David, typically volunteers, are the true heroes of modern science, spending days and nights detecting plagiarism as well as suspicious data, statistics and more. Looking at studies by Dana-Farber researchers, David found that images of mice, said to have been taken at different stages of an experiment, appeared to be identical, and spotted human bone-marrow samples presented in a misleading way. This kind of painstaking work has only become possible on any sort of scale thanks to the development of forensic tools, some powered by AI.
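For readers curious about the mechanics, here is a minimal sketch, in Python, of one widely used screening technique: perceptual hashing, which flags images that remain near-identical even after resizing or re-encoding. It is an illustration under assumptions, not any sleuth’s actual software; the Pillow and imagehash libraries are real, but the file names and distance threshold here are hypothetical.

```python
# Minimal sketch of duplicate-image screening via perceptual hashing.
# Uses the open-source Pillow and imagehash libraries; the file names
# and the threshold below are hypothetical, for illustration only.
from PIL import Image
import imagehash

def near_duplicates(paths, max_distance=5):
    """Return pairs of image files whose perceptual hashes are close.

    A distance of 0 means visually identical; small distances mean the
    images match despite resizing, re-encoding or slight cropping.
    """
    hashes = {p: imagehash.phash(Image.open(p)) for p in paths}
    pairs = []
    for i, a in enumerate(paths):
        for b in paths[i + 1:]:
            # Subtracting two hashes gives their Hamming distance.
            if hashes[a] - hashes[b] <= max_distance:
                pairs.append((a, b))
    return pairs

# Hypothetical panels said to show different stages of an experiment:
print(near_duplicates(["mouse_week1.png", "mouse_week8.png"]))
```

Real forensic tools go much further, with sub-region matching and statistical checks, but the principle is the same: software can compare thousands of image pairs that no human reviewer would have time to examine.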
Those working on the issue come together online to share their methods and insights. The collaborative work is paying off systemically. For years, most publishers (and scientists) publicly denied there was anything worrisome going on with peer review, but sleuths and media scrutiny have prompted a reckoning. All the large publishing houses now employ research integrity teams to review allegations and retract papers if necessary.
But this is a Sisyphean task. As Retraction Watch has documented, paper mills — shady organisations selling scholarly manuscripts and authorship to researchers who want to get ahead — are rapidly proliferating, overwhelming a system that has never had enough peer reviewers to ensure that everything that is published is reliable. One publisher, Hindawi — acquired by Wiley in 2021 — recently had to retract 13,000 papers which turned out to come from paper mills.
We have even documented how bad actors bribe journal editors. In June 2023, the detective work of Nicholas Wise, a fluid dynamics researcher at Cambridge University, uncovered a Chinese firm offering journal editors large sums of money — more than $20,000 (£15,000) — to accept papers for publication.
While careful work and cautious conclusions get ignored, researchers are often rewarded, with grants and professorships, for publishing scary findings that make good headlines. And once advocacy groups, politicians and others jump on those findings, it becomes nearly impossible to walk the claims back.
The most well-known example is the 1998 Lancet paper co-authored by Andrew Wakefield, widely blamed for launching the modern anti-vaccine movement. That false report was finally retracted in 2010, the same year the General Medical Council struck Wakefield off for ‘serious professional misconduct’. Yet Wakefield is now a folk hero, including to people such as Robert F Kennedy Jr, the US secretary of health and human services. The retraction is used by many not as a basis for understanding that Wakefield was wrong, but as evidence that pharmaceutical companies decide what the medical literature says.
Scientists of all stripes have been forced into retractions. Gregg Semenza shared the 2019 Nobel prize in physiology or medicine for “discoveries of how cells sense and adapt to oxygen availability”. That same year, an anonymous researcher discovered duplicated and manipulated images in Semenza’s work; he has since had to retract 15 of his papers.
Sometimes retractions happen not because researchers have committed misconduct, but because they recognise, after criticism, that they got something wrong and need to pull back on claims. This happened recently in the case of a paper in Nature that included inflated estimates of the economic impact of climate change.
Behind it all lies an uncomfortable truth, one that is too easily misunderstood: science involves getting things wrong. When coupled with knowledge of scientific fraud — and realisations that following the wrong path may have wasted valuable resources — that insight could lead one to a kind of despairing nihilism. But it shouldn’t. We need to remember, even as we work to make these systems stronger, that science’s fallibility is part of its strength.
Rather than giving up, we should pay more attention to how we create perverse incentives — promoting quantity of publication over quality, and sexiness over meticulousness. Perhaps most importantly, we need to help the world understand that, when splashy results turn out to be incorrect and are retracted or amended, that’s all part of how we get closer to the truth.
Ivan Oransky is co-founder of Retraction Watch and executive director of its parent nonprofit, the Center for Scientific Integrity, where Alice Dreger is editor for the Medical Evidence Project