Could the replication crisis in scientific literature be addressed by having scientists independently attempt to reproduce their peers’ key experiments during the publication process? And would teams be incentivized to do so by having the opportunity to report their findings in a citable paper, to be published alongside the original study?
These are questions being asked by two researchers who say that a formal peer-replication model could greatly benefit the scientific community.
Anders Rehfeld, a researcher in human sperm physiology at Copenhagen University Hospital, began considering alternatives to standard peer review after encountering a published study that could not be replicated in his laboratory. Rehfeld’s experiments¹ revealed that the original paper was flawed, but he found it very difficult to publish the findings and correct the scientific record.
“I sent my data to the original journal, and they didn’t care at all,” Rehfeld says. “It was very hard to get it published somewhere where you thought the reader of the original paper would find it.”
The issues that Rehfeld encountered could have been avoided if the original work had been replicated by others before publication, he argues. “If a reviewer had tried one simple experiment in their own lab, they could have seen that the core hypothesis of the paper was wrong.”
Rehfeld collaborated with Samuel Lord, a fluorescence-microscopy specialist at the University of California, San Francisco, to devise a new peer-replication model.
In a white paper detailing the process², Rehfeld, Lord and their colleagues describe how journal editors could invite peers to attempt to replicate select experiments of submitted or accepted papers by authors who have opted in. In the field of cell biology, for example, that might involve replicating a western blot, a technique used to detect proteins, or an RNA-interference experiment that tests the function of a certain gene. “Things that would take days or weeks, but not months, to do” would be replicated, Lord says.
The model is designed to incentivize all parties to participate. Peer replicators — unlike peer reviewers — would gain a citable publication, and the authors of the original paper would benefit from having their findings confirmed. Early-career faculty members at primarily undergraduate institutions could be a good source of replicators: in addition to gaining citable replication reports to list on their CVs, they would get experience in performing new techniques in consultation with the original research team.
Rehfeld and Lord are discussing their idea with potential funders and journal editors, with the goal of running a pilot programme this year.
“I think most scientists would agree that some sort of certification process to indicate that a paper’s results are reproducible would benefit the scientific literature,” says Eric Sawey, executive editor of the journal Life Science Alliance, who plans to bring the idea to the publisher of his journal. “I think it would be a good look for any journal that would participate.”
Who pays?
Sawey says there are two key questions about the peer-replication model: who will pay for it, and who will find the labs to do the reproducibility tests? “It’s hard enough to find referees for peer review, so I can’t imagine cold e-mailing people, asking them to repeat the paper,” he says. Independent peer-review organizations, such as ASAPbio and Review Commons, might curate a list of interested labs, and could even decide which experiments would be replicated.
Lord says that having a third party organize the replication efforts would be great, and adds that funding “is a huge challenge”. According to the model, funding agencies and research foundations would ideally establish a new category of small grants devoted to peer replication. “It could also be covered by scientific societies, or publication fees,” Rehfeld says.
It’s also important for journals to consider what happens when findings can’t be replicated. “If authors opt in, you’d like to think they’re quite confident that the work is reproducible,” says Sawey. “Ideally, what would come out of the process is an improved methods or protocols section, which ultimately allows the replicating lab to reproduce the work.”
Most important, says Rehfeld, is ensuring that the peer-replication reports are published, irrespective of the outcome. If replication fails, then the journal and original authors would choose what to do with the paper. If an editor were to decide that the original manuscript was seriously undermined, for example, they could stop it from being published, or retract it. Alternatively, they could publish the two reports together, and leave the readers to judge. “I could imagine peer replication not necessarily as an additional ‘gatekeeper’ used to reject manuscripts, but as additional context for readers alongside the original paper,” says Lord.
A difficult but worthwhile pursuit
Attempting to replicate others’ work can be a challenging, contentious undertaking, says Rick Danheiser, editor-in-chief of Organic Syntheses, an open-access chemistry journal in which all papers are checked for replicability by a member of the editorial board before publication. Even for research from a well-resourced, highly esteemed lab, serious problems can be uncovered during reproducibility checks, Danheiser says.
Replicability is already challenging in a field such as synthetic organic chemistry, in which the identity and purity of every component in a reaction flask should be known, so the variables at play in some areas of biology and other fields could pose a whole new level of difficulty, says Richard Sever, assistant director of Cold Spring Harbor Laboratory Press in New York and co-founder of the bioRxiv and medRxiv preprint servers. “But just because it’s hard, doesn’t mean there might not be cases where peer replication would be helpful.”
The growing use of preprints, which decouple research dissemination from evaluation, allows some freedom to rethink peer evaluation, Sever adds. “I don’t think it could be universal, but the idea of replication being a formal part of evaluating at least some work seems like a good idea to me.”
An experiment to test a different peer-replication model in the social sciences is currently under way, says Anna Dreber Almenberg, who studies behavioural and experimental economics at the Stockholm School of Economics. Dreber is a board member of the Institute for Replication (I4R), an organization led by Abel Brodeur at the University of Ottawa that works to systematically reproduce and replicate research findings published in leading journals. In January, I4R began an ongoing partnership with Nature Human Behaviour to attempt computational reproduction of the data and findings of as many studies published from 2023 onwards as possible. Replication attempts from the first 18 months of the project will be gathered into a ‘meta-paper’ that will go through peer review and be considered for publication in the journal.
“It’s exciting to see how people from completely different research fields are working on related things, testing different policies to find out what works,” says Dreber. “That’s how I think we will solve this problem.”