Is AI ready to mass-produce lay summaries of research articles?


Generative AI might be a powerful tool in making research more accessible for scientists and the broader public alike. Credit: Getty

Thinking back to the early days of her PhD programme, Esther Osarfo-Mensah recalls struggling to keep up with the literature. “Sometimes, the wording or the way the information is presented actually makes it quite a task to get through a paper,” says the biophysicist at University College London. Lay summaries could be a time-saving solution. Short synopses of research articles written in plain language could help readers to decide which papers to focus on — but they aren’t common in scientific publishing. Now, the buzz around artificial intelligence (AI) has pushed software engineers to develop platforms that can mass-produce these synopses.

Scientists are drawn to AI tools because they excel at crafting text in accessible language, and they might even produce clearer lay summaries than those written by people. A study1 released last year looked at lay summaries published in one journal and found that those created by people were less readable than the original abstracts — potentially because some researchers struggle to replace jargon with plain language or to decide which facts to include when condensing the information into a few lines.

AI lay-summary platforms come in a variety of forms (see ‘AI lay-summary tools’). Some allow researchers to import a paper and generate a summary; others are built into web servers, such as the bioRxiv preprint database.

AI lay-summary tools

Several AI resources have been developed to help readers glean information about research articles quickly. They offer different perks. Here are a few examples and how they work; a simplified sketch of the extract-and-paraphrase approach they broadly share follows the list:

– SciSummary: This tool parses the sections of a paper to extract the key points and then runs those through the general-purpose large language model GPT-3.5 to transform them into a short summary written in plain language. Max Heckel, the tool’s founder, says it incorporates multimedia into the summary, too: “If it determines that a particular section of the summary is relevant to a figure or table, it will actually show that table or figure in line.”

– Scholarcy: This technology takes a different approach. Its founder, Phil Gooch, based in London, says the tool was trained on 25,000 papers to identify sentences containing verb phrases such as “has been shown to” that often carry key information about the study. It then uses a mixture of custom and open-source large language models to paraphrase those sentences in plain text. “You can actually create ten different types of summaries,” he adds, including one that lays out how the paper is related to previous publications.


– SciSpace: This tool was trained on a repository of more than 280 million data sets, including papers that people had manually annotated, to extract key information from articles. It uses a mixture of proprietary fine-tuned models and GPT-3.5 to craft the summary, says the company’s chief executive, Saikiran Chandha, based in San Francisco, California. “A user can ask questions on top of these summaries to further dig into the paper,” he notes, adding that the company plans to develop audio summaries that people can tune into on the go.
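
The code behind these commercial tools is proprietary, but the general pattern they describe, in which a paper is split into sections, key material is extracted and a general-purpose language model restates it in plain language, can be sketched in a few lines. The example below is a simplified, hypothetical illustration: it assumes the OpenAI Python client and a GPT-3.5 model, and the function names and prompt are invented for this sketch rather than taken from SciSummary, Scholarcy or SciSpace.

# Hypothetical extract-and-paraphrase lay-summary pipeline (illustration only;
# not code from any of the tools described above).
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def split_into_sections(paper_text):
    # Crude stand-in for a real section parser: split on blank lines.
    return [block.strip() for block in paper_text.split("\n\n") if block.strip()]

def summarize_section(section, audience="a general, non-specialist"):
    # Ask a general-purpose model to restate one section in plain language.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "Rewrite the following passage from a research paper as a short, "
                f"jargon-free summary for {audience} audience.")},
            {"role": "user", "content": section},
        ],
    )
    return response.choices[0].message.content.strip()

def lay_summary(paper_text):
    # Summarize each section, then join the pieces into one plain-language synopsis.
    return "\n".join(summarize_section(block) for block in split_into_sections(paper_text))

The commercial tools layer more on top of a skeleton like this, such as matching figures and tables to summary sections, classifiers trained to spot key sentences and multiple summary styles, but the underlying extract-and-paraphrase step is similar.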

Benefits and drawbacks

Mass-produced lay summaries could yield a trove of benefits. Beyond helping scientists to speed-read the literature, the synopses can be disseminated to people with different levels of expertise, including members of the public. Osarfo-Mensah adds that AI summaries might also aid people who struggle with English. “Some people hide behind jargon because they don’t necessarily feel comfortable trying to explain it,” she says, but AI could help them to rework technical phrases. Heckel, whose company SciSummary is based in Columbus, Ohio, offers a tool that lets users import a paper to be summarized. The tool can also translate summaries into other languages, and is gaining popularity in Indonesia and Turkey, he says, arguing that it could topple language barriers and make science more accessible.

Despite these strides, some scientists feel that improvements are needed before we can rely on AI to describe studies accurately.

Will Ratcliff, an evolutionary biologist at the Georgia Institute of Technology in Atlanta, argues that no tool can produce better text than professional writers can. Although researchers have different writing abilities, he invariably prefers reading scientific material produced by study authors over text generated by AI. “I like to see what the authors wrote. They put craft into it, and I find their abstract to be more informative,” he says.


Nana Mensah, a PhD student in computational biology at the Francis Crick Institute in London, adds that, unlike AI, people tend to craft a narrative when writing lay summaries, helping readers to understand the motivations behind each step of the study. He says, however, that one advantage of AI platforms is that they can write summaries at different reading levels, potentially broadening the audience. Even so, in his experience these synopses might still include jargon that can confuse readers without specialist knowledge.

Sometimes, AI tools struggle to translate technical language into lay terms at all. Osarfo-Mensah works in biophysics, a field with many intricate parameters and equations. She found that an AI summary of one of her research articles left out the information from an entire section. If researchers looking for a paper with those details consulted only the AI summary, they might pass over her paper and look for other work.

Andy Shepherd, scientific director at the global technology company Envision Pharma Group in Horsham, UK, has compared the performance of several AI tools in his spare time to see how often they introduce errors. He used eight text generators, including general-purpose ones and some that had been optimized to produce lay summaries, to create lay summaries of two papers. He then asked people with different backgrounds, such as health-care professionals and members of the public, to assess how clear, readable and useful the summaries were.

“All of the platforms produced something that was coherent and read like a reasonable study, but a few of them introduced errors, and two of them actively reversed the conclusion of the paper,” he says. It’s easy for AI tools to make this mistake by, for instance, omitting the word ‘not’ in a sentence, he explains. Ratcliff cautions that AI summaries should be viewed as a tool’s “best guess” of what a paper is about, stressing that the software can’t check facts.

Broader readership

The risk of AI summaries introducing errors is one concern among many. Another is that one benefit of such summaries — that they can help to share research more widely among the public — could also have drawbacks. The AI summaries posted alongside bioRxiv preprints, research articles that have yet to undergo peer review, are tailored to different levels of reader expertise, including that of the public. Osarfo-Mensah supports the effort to widen the reach of these works. “The public should feel more involved in science and feel like they have a stake in it, because at the end of the day, science isn’t done in a vacuum,” she says.


But others point out that this comes with the risk of making unreviewed and inaccurate research more accessible. Mensah says that academics “will be able to treat the article with the sort of caution that’s required”, but he isn’t sure that members of the public will always understand when a summary refers to unreviewed work. Lay summaries of preprints should come with a “hazard warning” informing the reader upfront that the material has yet to be reviewed, says Shepherd.

“We agree entirely that preprints must be understood as not peer-reviewed when posted,” says John Inglis, co-founder of bioRxiv, who is based at Cold Spring Harbor Laboratory in New York. He notes that such a disclaimer can be found on the homepage of each preprint, and that if a member of the public navigates to a preprint through a web search, they are first directed to the homepage displaying this disclaimer before they can access the summary. But the warning labels are not integrated into the summaries themselves, so there is a risk that these could be shared on social media without the disclaimer. Inglis says bioRxiv is working with its partner ScienceCast, whose technology produces the synopses, on adding a note to each summary to mitigate this risk.

As is the case for many other nascent generative-AI technologies, humans are still working out the messaging that might be needed to ensure users are given adequate context. But if AI lay-summary tools can successfully mitigate these and other challenges, they might become a staple of scientific publishing.


