Last September, while completing a grant application, I faltered at a section labelled ‘summary of progress’. This section, written in a narrative style, was meant to tell reviewers about who I was and why I should be funded. Among other things, it needed to outline any family leave I’d taken; to spell out why my budget was reasonable, given my past funding; and to include any broad ‘activities, contributions and impacts’ that would support the application.
How could I sensibly combine an acknowledgement of two maternity leaves with a description of my engagement with open science, while also making the case that I was worthy of the funding I'd requested? There was no indication of the criteria reviewers would use to evaluate what I wrote. I was at a loss.
When my application was rejected in January, the reviewers didn’t comment on my narrative summary. Yet they did mention my publication record, part of the conventional academic CV that I was also required to submit. So I’m still none the wiser as to how the summary was judged — or if it was considered at all.
As co-chair of the Declaration on Research Assessment (DORA), a global initiative that aims to improve how research is evaluated, I firmly believe in using narrative reflections for job applications, promotions and funding. Narratives make space for broad research impacts, from diversity, equity and inclusion efforts to educational outreach, which are hard to include in typical CVs. But I hear stories like mine time and again. The academic community is attempting, in good faith, to move away from narrow assessment metrics such as publications in high-impact journals. But institutes are struggling to create workable narrative assessments, and researchers are struggling to write them.
The problem arises because new research assessment systems are not being planned and implemented properly. This must change. Researchers need explicit evaluation criteria that help them to write narratives by spelling out how different aspects of the text will be weighted and judged.
Research communities must be involved in designing these criteria. All too often, researchers tell me about assessment systems being imposed from the top down, with no consultation. That risks making the new systems no better than those they replace.
Assessments should be mission-driven and open to change over time. For example, if an institute wants to increase awareness and implementation of open science, its assessments of which researchers should be promoted could reward those who have undertaken relevant training or implemented practices such as data sharing. As open science becomes more mainstream, assessments could reduce the weight given to such practices.
The value of different research outputs will vary between fields, institutes and countries. Funding bodies in Canada, where I work, might favour grants that prioritize Indigenous engagement and perspectives in research — a key focus of diversity, equity and inclusion efforts in the Canadian scientific community. But the same will not apply in all countries.
Organizations must understand that reform can’t be done well on the cheap. They should invest in implementation scientists, who are trained to investigate the factors that stop new initiatives succeeding and find ways to overcome them. These experts can help to get input from the research community, and to bring broad perspectives together into a coherent assessment framework.
Some might argue that it would be better for cash-strapped research organizations to rework existing assessments to suit their needs rather than spend money on experts to develop a new one. Yes, sharing resources and experiences is often useful. But because each research community is unique, copying a template is unlikely to produce a useful assessment. DORA is creating tools to help. One is Reformscape (see go.nature.com/4ab8aky) — an organized database of mini case studies that highlight progress in research reform, including policies and sample CVs that can be adapted for use in fresh settings. This will allow institutions to build on existing successes.
Crucially, implementation scientists are also well placed to audit how a new system is doing, and to make iterative changes. No research evaluation system will work perfectly at first — organizations must commit sustained resources to monitoring and improving it.
The Luxembourg National Research Fund (FNR) shows the value of this iterative approach. In 2021, it began requesting a narrative CV for funding applications, rather than a CV made up of the usual list of affiliations and publications. Since then, it has been studying how well this system works. It has had mostly positive feedback, but researchers in some fields are less satisfied, and there is evidence that institutes aren’t providing all researchers with the guidance they need to complete the narrative CV. In response, the FNR is now investigating how to adapt the CV to better serve its communities.
Each institution has its own work to do if academia is truly to reform research assessment. Institutions that drag their feet are sending a message that they are prepared to continue supporting a flawed system that wastes research time and investment.
Competing Interests
K.C. is the co-chair of DORA (Declaration on Research Assessment); this is an unpaid role.