Mikołaj Piniewski is a researcher to whom PhD students and collaborators turn when they need to revise or refine a manuscript. The hydrologist, at the Warsaw University of Life Sciences, has a keen eye for problems in text — a skill that came in handy last year when he encountered some suspicious writing in peer-review reports of his own paper.
Last May, when Piniewski was reading the peer-review feedback that he and his co-authors had received for a manuscript they’d submitted to an environmental-science journal, alarm bells started ringing in his head. Comments by two of the three reviewers were vague and lacked substance, so Piniewski decided to run a Google search, looking at specific phrases and quotes the reviewers had used.
To his surprise, he found that the comments were identical to ones already available on the Internet, in multiple open-access review reports from publishers such as MDPI and PLOS. “I was speechless,” says Piniewski. The revelation prompted him to revisit another manuscript he had submitted a few months earlier and dig out the peer-review reports for that submission. There, he found more plagiarized text. After e-mailing several collaborators, he assembled a team to dig deeper.
The team published the results of its investigation in Scientometrics in February1. The analysis examined dozens of cases of apparent plagiarism in peer-review reports, identifying identical phrases across reports prepared for 19 journals and exact quotes duplicated across 50 publications. The authors say the findings are just “the tip of the iceberg” when it comes to misconduct in the peer-review system.
Dorothy Bishop, a former neuroscientist at the University of Oxford, UK, who has turned her attention to investigating research misconduct, was “favourably impressed” by the team’s analysis. “I felt the way they approached it was quite useful and might be a guide for other people trying to pin this stuff down,” she says.
Peer review under review
Piniewski and his colleagues conducted three analyses. First, they ran the five peer-review reports from his laboratory’s two manuscripts through a rudimentary online plagiarism-detection tool. The reports showed 44–100% similarity to previously published online content, and the tool provided links to the sources in which the duplications were found.
The researchers drilled down further. They broke one of the suspicious peer-review reports down to fragments of one to three sentences each and searched for them on Google. In seconds, the search engine returned a number of hits: the exact phrases appeared in 22 open peer-review reports, published between 2021 and 2023.
The final analysis provided the most worrying results. They took a single quote — 43 words long and featuring multiple language errors, including incorrect capitalization — and pasted it into Google. The search revealed that the quote, or variants of it, had been used in 50 peer-review reports.
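The fragment-search approach described above can be illustrated in code. The sketch below is purely illustrative, not the team’s actual pipeline: it splits a review report into short overlapping runs of sentences (echoing the one-to-three-sentence fragments the researchers searched for) and flags any fragment that appears verbatim in a corpus of previously published open peer-review reports.

```python
# Illustrative sketch only: verbatim fragment matching between one
# review report and a corpus of published open peer-review reports.
import re

def fragments(text, size=3):
    """Split text into overlapping runs of `size` consecutive sentences."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    runs = [' '.join(sentences[i:i + size])
            for i in range(len(sentences) - size + 1)]
    return runs or sentences  # fall back to single sentences for short texts

def find_duplicates(report, corpus):
    """Return fragments of `report` that occur verbatim in any corpus document."""
    return [f for f in fragments(report) if any(f in doc for doc in corpus)]

# Hypothetical example data, not taken from the study.
published = [
    "The manuscript is well written. However, the methods need clarification. "
    "Please add more references.",
]
suspect = ("The abstract is fine. The manuscript is well written. "
           "However, the methods need clarification. Please add more references.")
print(find_duplicates(suspect, published))
```

In practice a Google search over exact phrases, as the team used, covers a far larger corpus than any local collection of reports could.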
Predominantly, these reports were from journals published by MDPI, PLOS and Elsevier, and the team found that the amount of duplication increased year on year between 2021 and 2023. Whether this reflects the growing number of openly published peer-review reports over that period or a genuinely worsening problem is unclear, but Piniewski thinks that it could be a little bit of both.
Why would a peer reviewer use plagiarized text in their report? The team says that some might be attempting to save time, whereas others could be motivated by a lack of confidence in their writing ability, for example, if they aren’t fluent in English.
The team notes that there are instances that might not represent misconduct. “A tolerable rephrasing of your own words from a different review? I think that’s fine,” says Piniewski. “But I imagine that most of these cases we found are actually something else.”
The source of the problem
Duplication and manipulation of peer-review reports are not new phenomena. “I think it’s now increasingly recognized that the manipulation of the peer-review process, which was recognized around 2010, was probably an indication of paper mills operating at that point,” says Jennifer Byrne, director of biobanking at New South Wales Health in Sydney, Australia, who also studies research integrity in scientific literature.
Paper mills — organizations that churn out fake research papers and sell authorships to turn a profit — have been known to tamper with reviews to push manuscripts through to publication, says Byrne.
However, when Bishop looked at Piniewski’s case, she could not find any overt evidence of paper-mill activity. Rather, she suspects that journal editors might be involved in cases of peer-review-report duplication and suggests studying the track records of those who’ve allowed inadequate or plagiarized reports to proliferate.
Piniewski’s team is also concerned that duplications will rise as generative artificial intelligence (AI) becomes easier to access. Although the team didn’t look for signs of AI use, the technology’s ability to quickly ingest and rephrase large swathes of text makes it an emerging concern.
A preprint posted in March2 showed evidence of researchers using AI chatbots to assist with peer review, identifying specific adjectives that could be hallmarks of AI-written text in peer-review reports.
Bishop isn’t as concerned as Piniewski about AI-generated reports, saying that it’s easy to distinguish between AI-generated text and legitimate reviewer commentary. “The beautiful thing about peer review,” she says, is that it is “one thing you couldn’t do a credible job with AI”.
Preventing plagiarism
Publishers seem to be taking action. Bethany Baker, a media-relations manager at PLOS, who is based in Cambridge, UK, told Nature Index that the PLOS Publication Ethics team “is investigating the concerns raised in the Scientometrics article about potential plagiarism in peer reviews”.
An Elsevier representative told Nature Index that the publisher “can confirm that this matter has been brought to our attention and we are conducting an investigation”.
In a statement, the MDPI Research Integrity and Publication Ethics Team said that it has been made aware of potential misconduct by reviewers in its journals and is “actively addressing and investigating this issue”. It did not confirm whether this was related to the Scientometrics article.
One proposed solution to the problem is ensuring that all submitted reviews are checked using plagiarism-detection software. In 2022, exploratory work by Adam Day, a data scientist at Sage Publications, based in Thousand Oaks, California, identified duplicated text in peer-review reports that might be suggestive of paper-mill activity. Day offered a similar solution of using anti-plagiarism software, such as Turnitin.
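A minimal version of such screening can be sketched as follows. This is only an illustration under simple assumptions (plain-text reviews, pairwise comparison with Python’s standard difflib), not how Turnitin or any publisher’s actual system works:

```python
# Illustrative sketch: flag pairs of review reports whose overall text
# similarity exceeds a threshold, using Python's standard library.
from difflib import SequenceMatcher
from itertools import combinations

def flag_similar_reviews(reviews, threshold=0.8):
    """Return index pairs of reviews whose similarity ratio meets the threshold."""
    flagged = []
    for (i, a), (j, b) in combinations(enumerate(reviews), 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            flagged.append((i, j))
    return flagged

# Hypothetical example reviews, not drawn from the study.
reviews = [
    "The paper is interesting but the statistical analysis should be improved.",
    "The paper is interesting but the statistical analysis must be improved.",
    "This study addresses an entirely different question with novel methods.",
]
print(flag_similar_reviews(reviews))  # → [(0, 1)]
```

Pairwise comparison scales poorly with the volume of reviews a large publisher handles; production systems typically index text fingerprints rather than comparing every pair directly.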
Piniewski expects the problem to get worse in the coming years, but he hasn’t received any unusual peer-review reports since those that originally sparked his research. Still, he says that he’s now even more vigilant. “If something unusual occurs, I will spot it.”