A free online tool released earlier this month alerts researchers when a paper cites studies that are mentioned on the website PubPeer, a forum scientists often use to raise integrity concerns surrounding published papers.
Studies are usually flagged on PubPeer when readers have suspicions, for example about image manipulation, plagiarism, data fabrication or artificial intelligence (AI)-generated text. PubPeer already offers its own browser plug-in that alerts users when a study that they are reading has been posted on the site. The new tool, a plug-in released on 13 April by RedacTek, based in Oakland, California, goes further: it searches through reference lists for papers that have been flagged. The software pulls information from many sources, including PubPeer’s database; Crossref, the digital-infrastructure organization that assigns digital object identifiers (DOIs) to articles; and OpenAlex, a free index of hundreds of millions of scientific documents.
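As a rough sketch of how such a check might work (an illustration, not RedacTek’s code), a paper’s reference list can be pulled from Crossref’s public API and each cited DOI then looked up on PubPeer. The PubPeer endpoint below is a hypothetical placeholder, because the site’s comment-lookup API requires a developer key.

```python
import requests

# Hypothetical placeholder: PubPeer's real comment-lookup API requires a developer key.
PUBPEER_LOOKUP = "https://example.org/pubpeer-lookup"

def cited_dois(doi: str) -> list[str]:
    """Return the DOIs in a paper's reference list, via the public Crossref API."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    refs = resp.json()["message"].get("reference", [])
    return [r["DOI"] for r in refs if "DOI" in r]

def pubpeer_flagged_references(doi: str) -> list[str]:
    """Return cited DOIs that have PubPeer comments, per the placeholder lookup."""
    flagged = []
    for cited in cited_dois(doi):
        r = requests.get(PUBPEER_LOOKUP, params={"doi": cited}, timeout=30)
        if r.ok and r.json().get("has_comments"):
            flagged.append(cited)
    return flagged
```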
It’s important to track mentions of referenced articles on PubPeer, says Jodi Schneider, an information scientist at the University of Illinois Urbana-Champaign, who has tried out the RedacTek plug-in. “Not every single reference that’s in the bibliography matters, but some of them do,” she adds. “When you see a large number of problems in somebody’s bibliography, that just calls everything into question.”
The aim of the tool is to flag potential problems with studies to researchers early on, to reduce the circulation of poor-quality science, says RedacTek founder Rick Meyler, based in Emeryville, California. Future versions might also use AI to automatically assess whether the PubPeer comments on a paper are positive or negative, he adds.
Third-generation retractions
As well as flagging PubPeer discussions, the plug-in indicates when a study, or the papers that it cites, has been retracted. There are existing tools that alert academics about retracted citations; some can do this during the writing process, so that researchers are aware of the publication status of studies when constructing bibliographies. But with the new tool, users can opt in to receive notifications about further ‘generations’ of retractions — alerts cover not only the study that they are reading, but also the papers it cites, articles cited by those references and even papers cited by the secondary references.
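A minimal sketch of what multi-generation checking could look like, assuming the citation graph comes from the public OpenAlex API, whose work records expose a referenced_works list and an is_retracted flag (again, an illustration rather than the plug-in’s own implementation):

```python
import requests

OPENALEX = "https://api.openalex.org/works/"

def get_work(work_id: str) -> dict:
    """Fetch one OpenAlex work record by OpenAlex ID (e.g. 'W2741809807') or 'doi:...'."""
    resp = requests.get(OPENALEX + work_id, timeout=30)
    resp.raise_for_status()
    return resp.json()

def retracted_in_reference_tree(work_id: str, depth: int = 3) -> set[str]:
    """Check the paper itself plus `depth` generations of cited works for retractions."""
    retracted, seen, frontier = set(), {work_id}, [work_id]
    for generation in range(depth + 1):
        next_frontier = []
        for wid in frontier:
            work = get_work(wid)
            if work.get("is_retracted"):
                retracted.add(wid)
            if generation < depth:
                for cited in work.get("referenced_works", []):
                    short_id = cited.rsplit("/", 1)[-1]  # strip the https://openalex.org/ prefix
                    if short_id not in seen:
                        seen.add(short_id)
                        next_frontier.append(short_id)
        frontier = next_frontier
    return retracted
```

A real tool would batch and cache these requests; walking three generations of references one work at a time would be far too slow in practice.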
The software also calculates a ‘retraction association value’ for studies, a metric that measures the extent to which the paper is associated with science that has been withdrawn from the literature. As well as informing individual researchers, the plug-in could help scholarly publishers to keep tabs on their own journals, Meyler says, because it allows users to filter by publication.
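RedacTek has not published the formula behind this metric, so the following is purely illustrative: one hypothetical way to compute such a score is to weight the share of retracted papers in each generation of references, with closer generations counting more. The weights and example numbers below are invented for the sketch.

```python
def retraction_association_value(per_generation: list[tuple[int, int]]) -> float:
    """Hypothetical score from (retracted_count, total_count) pairs for generations 1..n."""
    weights = [1.0, 0.5, 0.25]  # invented decay: nearer generations weigh more
    score = 0.0
    for (retracted, total), weight in zip(per_generation, weights):
        if total:
            score += weight * retracted / total
    return score / sum(weights[:len(per_generation)])

# Example: 2 of 40 direct references retracted, 5 of 600 in the second
# generation and 10 of 8,000 in the third.
print(retraction_association_value([(2, 40), (5, 600), (10, 8000)]))
```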
In its ‘paper scorecard’, the tool also flags any paper, across the three generations of referenced studies, whose bibliography contains more than 25% self-citations (references by authors to their own previous work).
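Assuming the author lists for a paper and for each of its references are already available (for example from OpenAlex metadata), the 25% rule reduces to a simple proportion check, sketched here:

```python
def exceeds_self_citation_threshold(paper_authors: set[str],
                                     reference_author_lists: list[set[str]],
                                     threshold: float = 0.25) -> bool:
    """Flag a paper if more than `threshold` of its references share an author with it."""
    if not reference_author_lists:
        return False
    self_cites = sum(1 for authors in reference_author_lists if paper_authors & authors)
    return self_cites / len(reference_author_lists) > threshold
```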
Future versions could highlight whether papers cited retracted studies before or after the retraction was issued, notes Meyler, or whether mentions of such studies acknowledge the retraction. That would be useful, says Schneider, who co-authored a 2020 analysis that found that as little as 4% of citations to retracted studies note that the referenced paper has been retracted1.
Meyler says that RedacTek is currently in talks with the scholarly-services firm Cabell’s International in Beaumont, Texas, which maintains pay-to-view lists of suspected predatory journals: outlets that publish articles without proper quality checks for issues such as plagiarism, yet still collect authors’ fees. The plan is to use these lists to improve the tool so that it can also automatically flag any cited papers published in such journals.