Larger or longer grants unlikely to push senior scientists towards high-risk, high-reward work

The duration and value of a grant are not likely to alter the research strategies of recipients in the United States. Credit: DigitalVision/Getty

Offering professors more money or time isn’t likely to dramatically change how they do their research, a survey of US-based academics has found.

The survey, described in a preprint article posted on arXiv in December1, was completed by 4,175 professors across several disciplines, including the natural sciences, social sciences, engineering, mathematics and humanities.

The study’s authors, Kyle Myers and Wei Yang Tham, both economists at Harvard Business School in Boston, Massachusetts, say the aim was to investigate whether senior scientists would conduct their research differently if they had more money but less time, or vice versa.

The research comes amid interest from some funders in tweaking the amount of time and money awarded to scientists to incentivize them to do more socially valuable work. For instance, in 2017, the Howard Hughes Medical Institute in Chevy Chase, Maryland, announced that it had extended its grants from five to seven years, arguing that the extra time would allow researchers to “take more risk and achieve more transformative advances”.

Acknowledging that the most reliable way to test how grant characteristics might affect researchers’ work is to award them actual grants — which was not feasible — Myers and Tham instead presented them with hypothetical scenarios.

The survey respondents were asked what research strategies they would pursue if they were offered a certain sum of grant money for a fixed time period. Both the value and duration were randomly assigned. The hypothetical grants were worth US$100,000 to $2 million and ran between two and ten years.
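
The randomized assignment described above can be illustrated with a short sketch (the uniform sampling, the discrete value grid and the variable names here are assumptions for illustration; the preprint describes the actual assignment procedure):

```python
import random

def draw_hypothetical_grant(rng: random.Random) -> dict:
    """Draw one hypothetical grant vignette. Value and duration are
    assigned independently at random, mirroring the survey's design;
    the exact distributions used here are illustrative assumptions."""
    value = rng.choice([100_000, 250_000, 500_000, 1_000_000, 2_000_000])
    duration_years = rng.choice(range(2, 11))  # 2 to 10 years inclusive
    return {"value_usd": value, "duration_years": duration_years}

# One randomized vignette per respondent (4,175 in the survey).
rng = random.Random(0)
vignettes = [draw_hypothetical_grant(rng) for _ in range(4_175)]
assert all(100_000 <= v["value_usd"] <= 2_000_000 for v in vignettes)
assert all(2 <= v["duration_years"] <= 10 for v in vignettes)
```

Randomizing the two attributes independently is what lets the authors separate the effect of more money from the effect of more time.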

To capture changes in strategy, the survey offered participants five options they could take if they obtained the hypothetical grant. These included pursuing riskier projects (for example, those with only a small chance of success), pursuing projects unrelated to their current work, and increasing the speed or size of their ongoing projects.

The survey revealed that longer grants increased the researchers’ willingness to pursue riskier projects — but this held true only for tenured professors, who can afford to take a gamble because they tend to have long-term job security, an established reputation and access to more resources. The authors note, however, that any change in research strategy that resulted from receiving a longer grant was not substantial.

Non-tenured professors were not swayed towards risk-taking when they received longer grants. This finding suggests that longer grant designs don’t take into account the pressures that come with shorter employment contracts, says Myers. “If you’re a professor who’s on a 1- or 2-year contract, where you have to get renewed every year, then the difference between a 5-year or 10-year grant is not as important as performing in the next year or two,” he says.

Both tenured and non-tenured professors said longer, larger grants would slow down how fast they worked, “which suggests a significant amount of racing in science is in pursuit of resources”, the authors say, adding that this effect was also minor.

Myers and Tham report that the professors were “very unwilling” to reduce the amount of grant funding in exchange for a longer duration. “Money is much more valuable than time,” they conclude. They found that the professors valued a 1% increase in grant money nearly four times more than a 1% increase in grant duration. The study concludes that the researchers didn’t seem to view the length of a single grant as “an important constraint on their research pursuits given their preferences, incentives and expected access to future funding sources”.
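
That four-to-one valuation can be made concrete with back-of-the-envelope arithmetic (only the 4:1 ratio comes from the study; the stylized linear utility below is an illustrative assumption, not the authors’ model):

```python
def utility_gain(pct_money: float, pct_duration: float,
                 money_weight: float = 4.0, duration_weight: float = 1.0) -> float:
    """Stylized utility gain from a grant tweak: respondents valued a
    1% increase in grant money about four times more than a 1%
    increase in grant duration, so money gets four times the weight."""
    return money_weight * pct_money + duration_weight * pct_duration

# Under this weighting, a 1% budget increase is worth roughly
# the same as a 4% duration increase.
assert utility_gain(1.0, 0.0) == utility_gain(0.0, 4.0)
```

This is why trading funding away for extra years looks unattractive to respondents: the duration gain must be about four times the money given up just to break even.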

Experimenting with grant structures

Carl Bergstrom, a biologist at the University of Washington in Seattle who has studied science-funding models, says it’s interesting that substantial changes in grant structure generally yielded little to no change in the researchers’ hypothetical behaviour. “I just don’t know what to make of that,” he says, noting that it’s unclear whether this finding is a result of the study design, or is saying something about scientists’ attitude towards change. “One consistent explanation of all of this would be that fairly reasonable changes in the structure of one particular individual grant don’t do enough to change the overall incentive structure that scientists face for them to alter their behaviour.”

Bergstrom adds that modifying grant structures can still be a valuable exercise that could result in different kinds of candidate applying for and securing funding, which in turn might affect the kind of research that is produced. Myers and Tham didn’t examine whether modifying grant structures would affect the diversity of the pool of candidates, but they have investigated the nuances of risk-taking in research in another study, also posted as a preprint in December2. Researchers were surveyed about their appetite for risky science and how it affected their approach to grants. The survey found a strong link between the perceived risk of research and the amount of time spent applying for grants.

To get a clearer understanding of whether the findings of the surveys would hold in the real world, funders would need to modify actual grants, says Myers. He acknowledges that this would be a big commitment and a risk, but doing so could have significant benefits for science.

There is growing interest in finding more efficient and effective grant structures. In November, the national funder UK Research and Innovation launched a new Metascience Unit, which is dedicated to finding more sophisticated and efficient ways to make funding and policy decisions. The following month, the US National Science Foundation announced that it would be conducting a series of social and economic experiments to determine how its funding processes can be improved.

As for the survey, Myers hopes the findings can provide insights to inform such initiatives. “As long as we’ve reduced uncertainty about what is the best way forward, that is very valuable,” he says. “We hope that our hypothetical experiments are motivation for more real-world experiments in the future.”

scientists use AI to design antibodies from scratch

Antibodies (pink) bind to influenza virus proteins (yellow) (artist’s conception). Credit: Juan Gaertner/Science Photo Library

Researchers have used generative artificial intelligence (AI) to help them make completely new antibodies for the first time.

The proof-of-principle work, reported this week in a preprint on bioRxiv1, raises the possibility of bringing AI-guided protein design to the therapeutic antibody market, which is worth hundreds of billions of dollars.

Antibodies — immune molecules that strongly attach to proteins implicated in disease — have conventionally been made using brute-force approaches that involve immunizing animals or screening vast numbers of molecules.

AI tools that can shortcut those costly efforts have the potential to “democratize the ability to design antibodies”, says study co-author Nathaniel Bennett, a computational biochemist at the University of Washington in Seattle. “Ten years from now, this is how we’re going to be designing antibodies.”

“It’s a really promising piece of research” that represents an important step in applying AI protein-design tools to making new antibodies, says Charlotte Deane, an immuno-informatician at the University of Oxford, UK.

Making mini proteins

Bennett and his colleagues used an AI tool, released by their team last year2, that has helped to transform protein design. The tool, called RFdiffusion, allows researchers to design mini proteins that can strongly attach to another protein of choice. But these custom proteins bear no resemblance to antibodies, which recognize their targets by way of floppy loops that have proved difficult to model with AI.

To overcome this, a team co-led by computational biophysicist David Baker and computational biochemist Joseph Watson, both at the University of Washington, modified RFdiffusion. The tool is based on a neural network similar to those used by image-generating AIs such as Midjourney and DALL·E. The team fine-tuned the network by training it on thousands of experimentally determined structures of antibodies attached to their targets, as well as real-world examples of other antibody-like interactions.

Using this approach, the researchers designed thousands of antibodies that recognize specific regions of several bacterial and viral proteins — including those that the SARS-CoV-2 and influenza viruses use to invade cells — and a cancer drug target. They then made a subset of their designs in the laboratory and tested whether the molecules could bind to the right targets.

Watson says that about one in 100 antibody designs worked as hoped — a lower success rate than the team now achieves with other types of AI-designed protein. The researchers determined the structure of one of the influenza antibodies, using a technique called cryo-electron microscopy, and found that it recognized the intended portion of the target protein.
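
The practical consequence of a roughly 1-in-100 hit rate can be seen with simple screening arithmetic (a generic binomial calculation under an independence assumption, not the team’s own analysis):

```python
def prob_at_least_one_binder(n_designs: int, hit_rate: float = 0.01) -> float:
    """Probability that at least one of n lab-tested designs binds its
    target, assuming independent trials at the stated ~1-in-100 rate."""
    return 1.0 - (1.0 - hit_rate) ** n_designs

# Even at a 1% hit rate, testing a few hundred designs makes
# finding at least one working binder very likely.
assert prob_at_least_one_binder(100) > 0.6
assert prob_at_least_one_binder(500) > 0.99
```

This is why a low per-design success rate can still be workable when thousands of candidate designs are cheap to generate, though it also shows why raising the hit rate matters for harder targets.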

Early proof of principle

A handful of companies are already using generative AI to help develop antibody drugs. Baker and Watson’s team hopes that RFdiffusion can help to tackle drug targets that have proved challenging, such as G-protein coupled receptors — membrane proteins that help to control a cell’s responses to external chemicals.

But the antibodies that RFdiffusion churned out are a long way from reaching the clinic. The designer antibodies that did work didn’t bind to their targets particularly strongly. Any antibody used therapeutically would also need its sequences modified to resemble natural human antibodies so as not to provoke an immune reaction.

The designs are also what’s known as single-domain antibodies, which resemble those found in camels and sharks, rather than the more complex proteins that nearly all approved antibody drugs are based on. These types of antibody are easier to design and simpler to study in the lab, and it makes sense to design these first, says Deane. “But this doesn’t take away from it being a step on the way to the kinds of methods we need.”

“This is proof-of-principle work,” Watson stresses. But he hopes this initial success will pave the way for designing antibody drugs at the touch of a button. “It feels like quite a landmark moment. It really shows this is possible.”

the career costs for scientists battling long COVID

People with long COVID often struggle to get sufficient support in the workplace; researchers are no exception. Credit: Vuk Valcic/SOPA Images/Shutterstock

Abby Koppes got COVID-19 in March 2020, just as the world was waking up to the unprecedented scale on which the virus was spreading. Her symptoms weren’t bad at first. She spent the early lockdown period in Boston, Massachusetts, preparing her tenure application.

During that summer of frenzied writing, Koppes’s symptoms worsened. She often awoke in the night with her heart racing. She was constantly gripped by fatigue, but she brushed off the symptoms as due to work stress. “You gaslight yourself a little bit, I guess,” she says.

Soon after Koppes submitted her tenure application in July, she began experiencing migraines for the first time, which left her bedridden. Her face felt as if it was on fire, a condition called trigeminal neuralgia that’s also known as suicide disease because of the debilitating pain it causes. Specialists took months to diagnose her with a series of grim-sounding disorders: Sjögren’s syndrome, small-fibre polyneuropathy and postural orthostatic tachycardia syndrome. To make time for the litany of doctors’ appointments, Koppes took a six-month “self-care sabbatical”.

It’s a bit of a blur, she says, but Koppes, a biochemical engineer at Northeastern University in Boston, describes September 2021 to April 2023 as a dark period in her life. Fortunately, she was buoyed by one monumental victory that preceded it: she was granted tenure in summer 2021.

Abby Koppes changed her research focus to study her own experience of long COVID. Credit: Adam Glanzman/Northeastern University

However, other academic researchers with long COVID might not count themselves so lucky. Koppes’s experience has compelled her to speak up for other researchers with the condition. It needn’t spell the end of an academic career, provided institutions step up to help. Nature spoke to researchers living with long COVID to find out how they manage the illness amid the pressures of academic research. (Many requested anonymity for privacy or for fear of repercussions on their careers and reputations.) They describe new realities that include budgeting for periods of fatigue and negotiating adjustments such as flexible working arrangements — an area, they say, in which academia can do better.

When academia meets long COVID

Koppes is one of at least 65 million people worldwide to develop long-term health problems after contracting the virus SARS-CoV-2. The World Health Organization defines long COVID as a suite of symptoms that continue or develop three months after the initial infection and last for at least two months.

Common symptoms of long COVID include cognitive impairment, fatigue and immune dysregulation. Weak or overburdened health-care systems in some nations mean many people who have the condition are left without appropriate care.

Moreover, in the cut-throat world of academia, in which it is the norm to push oneself through graduate training, the postdoctoral stage and the early-career years, long COVID throws up barriers for those seeking permanent positions, such as the promised land of tenure.

It could also squeeze diversity out of the talent pool — studies have shown that long COVID tends to disproportionately affect women and people of colour. “Women are already under-represented in higher roles,” says Natalie Holroyd, a computational medicine researcher with long COVID at University College London. “Is this going to exacerbate existing inequality?”

“Getting tenure was so profoundly destructive to my health that it prepared my body for severe long COVID,” one Latina researcher in the humanities tells Nature. “I feel like my academic job demands my death.”

Researchers with long COVID often face extra administrative burdens: dealing with the mountains of paperwork for disability claims and workplace-accommodation requests. These tasks can feel like a part-time job in their own right. “Not only are we trying to get all the same work done with many fewer functional hours, but we also have more work to do,” one US-based biology researcher says. “That doesn’t even count all of the extra hours that we have to spend dealing with getting health care.”

There’s also financial pressure. Researchers might feel the need to soldier on to continue to receive a steady income and, in many cases, employer-provided health insurance. The most vulnerable individuals are graduate students and postdoctoral scholars on temporary contracts. International early-career researchers’ visa status can be contingent on working full-time.

In some cases, seeking accommodations can feel out of reach. “I did not go up to anybody and say, ‘Hey, I’ve been dealing with this the entire two years. Can we do something about it?’” says Priya (not her real name), a master’s student with chronic post-COVID-19 health problems at one of the Indian Institutes of Science Education and Research. Organizing a community to advocate for a better learning and research environment takes time, effort and money. Convinced that the university can’t do much, Priya is resigned to bearing her poor health alone. “There are definitely other people here that have similar issues, but I don’t think there’s been a dialogue about it.”

Academics with long COVID also face societal ignorance about the condition, with several of those Nature spoke to reporting that they were mainly left to fend for themselves or to navigate workplace accommodation policies that aren’t tailored for long COVID. Many researchers conceal their illness for fear of stigma. Even with understanding colleagues, people with long COVID say they’re exhausted from constantly advocating for their needs and educating others about the condition.

Because some symptoms can be invisible, colleagues might negatively judge a co-worker’s performance or ability to participate. When Sarah (not her real name) started her assistant professorship at a US university, colleagues who were aware of her condition would occasionally tell her that she “looked good” during a meeting. “But it’s because I had very carefully managed my day,” she says. To be able to attend an hour-long meeting at the height of her symptoms, Sarah says she would sleep for two hours beforehand, then for another two afterwards to recuperate. “They don’t realize that there are four hours on either side that were devoted to making it possible.”

The need for extra rest can leave those with long COVID little time for pursuing career-advancing opportunities, especially travel. And because reinfection can exacerbate symptoms, crowd-facing activities aren’t safe, either, when masking is not required.

Sociologist Kerstin Sailer had to redefine what it meant to be a researcher living with the disabilities that come with long COVID. Credit: Beatrix Fuhrmann

Many high-achieving researchers with long COVID say that one of their biggest struggles is the loss of their identities that had been pegged to their cognitive abilities and productivity. Often, they learnt the hard way that pushing themselves beyond their limits would only cause them to crash later. “It took me a while to recognize that I am now a disabled academic,” says Kerstin Sailer, a sociology researcher at University College London. She had “to gather around and find my own kind of inner strength and redefine what it means to be me”.

But Sailer and others are a testament to the fact that long COVID need not signal a career dead end. With the right support, affected academics can still thrive.

Accommodations and flexibility

Researchers living with long COVID have found ways to adapt, often relying on assistance from peers. Koppes co-advises all of her students with her husband, an academic at the same university, which is helpful for the days she’s off sick. Other long-haulers have formed online support groups or leaned on collaborators to help them to cross project finishing lines. Kathleen Banks, a public-health doctoral student with long COVID at Boston University in Massachusetts, has an informal dissertation coach who holds her accountable for meeting graduation milestones without pushing her too hard.

Researchers say that the most important form of support is that offered by a compassionate supervisor, be it a department chair or a research adviser. They advise looking for someone who prioritizes your health and doesn’t put undue pressure on you to perform.

Holroyd says she’s grateful for having had the same supportive adviser since her PhD days. “He kept reassuring me that the work that [I’m] putting out is fine, it’s enough,” she says of her now-postdoctoral supervisor. “I’m unlikely to find that level of support elsewhere.”

Ideally, supervisors will also fight for needed accommodations. These can include having a private office, being able to work from home, teaching remotely and having a flexible schedule to deal with an unpredictable ailment.

Employers should also recognize that accommodations, such as virtual working, aren’t one-size-fits-all. Jane (not her real name) is a US-based researcher in the social sciences who developed mast-cell activation syndrome after a COVID-19 infection. In her case, this causes life-threatening allergic reactions to synthetic chemicals in scented products. She requested a high-efficiency particulate air filter for her classroom, but her institution recommended that she teach remotely instead.

However, as other classes at her institution returned to in-person formats, Jane says she noticed that students preferred those to virtual courses such as hers. She’s nervous about the impact this might have on the teaching evaluations that count towards tenure. She has proposed that her institution establish a fragrance-free policy for her office building, but her employers, although receptive, have declined to help her enforce the rules. “It felt like they threw everything at me to advocate for myself,” Jane says. “They basically proposed the remote option as an alternative to all the things that I had requested.”

In many countries, disability laws require employers to make reasonable allowances for disabled workers. Of course, the word ‘reasonable’ is open to interpretation. Not everyone has found workarounds for their job. One mathematics PhD student in the Netherlands quit his programme in his final year after contracting long COVID. And some scholars have pivoted to focus on less physically demanding and more remote-friendly research fields, choosing computational over experimental work, for example, to allow them to sidestep significant hands-on labour.

Many institutions have offered employees with long COVID tenure-clock pauses, deadline extensions and emergency health-related funding. Advocates welcome these short-term support measures, but say more needs to be done. Medical experts don’t know how long the condition might last, so academia needs to formulate long-term policies.

Without such policies, informal arrangements can signal to those with long COVID that they’re a burden. “My experience with the accommodation system has been [that] it just comes down so much to having a supportive principal investigator” to back you up, says one graduate student at a major US university who has long COVID. “That’s just not how it should be.”

Culture shift

Some advocates are calling for a culture that champions workplace accessibility for all: universal design. The concept aims to shift the onus of advocating for particular needs away from the individual. Universal design measures include — by default — live captioning for video-call events and the taking of meeting notes to share with absentees. Researchers with long COVID also advocate for those organizing seminars and conferences to enable remote attendance options.

Brainstorming for these initiatives needs to be a community-wide process, says Emily Shryock, the director of the University of Texas at Austin’s Disability Cultural Center, a community hub for those who identify as disabled and their allies. She recognizes that there will always be tricky situations that have no easy answer. Nevertheless, the broader goal is to reach a middle ground between measures that aren’t required by law any more, such as mask mandates, and individual preferences. “That would be the hope — that every person would feel like they can ask for what they need and be supported in that request, even if, ultimately, they don’t get exactly what they want,” she says.

Postdoctoral fellow Sandra Schachat says being vulnerable to contracting long COVID means she is likely to seek remote-working opportunities next. Credit: Andrés Baresch

Universal design is just the first step; academic culture has a long way to go to becoming more inclusive. People like Holroyd choose to stay with trusted advisers so as not to risk working with someone less empathetic. Others are leaving academia altogether. “Why would I want to spend my entire career begging for safety measures that are essential to my survival?” asks Sandra Schachat, a postdoctoral researcher and Schmidt Science Fellow at the University of Hawaii at Manoa. She has dodged COVID-19 so far, but she has an autoimmune disease and knows it makes her vulnerable to the infection’s chronic fallout. Although she says her current lab is “perfect”, she doesn’t trust the larger academic world to protect people like her. So, when her fellowship ends, she plans to explore a career in industry that will allow her to work remotely.

In academia’s rigid research-assessment system, which is based on the quantity of publications and invited talks a person clocks up, people with chronic illnesses find it incredibly hard to compete. Jane, the social scientist, says her university refuses to make exceptions to the tenure policy for those with long COVID. Other affected researchers call for academic success to be reimagined.

Chris Maddison says long COVID bolsters calls for more flexible research assessments. Credit: Dan Komoda/Institute for Advanced Study

“I do think that [universities] should broaden what they consider to be impact,” says Chris Maddison, a machine-learning researcher with long COVID at the University of Toronto, Canada. That could mean acknowledging different contributions towards society as being equally valuable. For example, in addition to papers published, his field could also count contributions such as releases of scalable, machine-learning prototypes. Nevertheless, Maddison admits that finding the solution to equitable academic assessment isn’t simple. “Maybe long COVID is just one other impetus to say we need to really solve this problem.”

On an individual level, long COVID has also served as a wake-up call to some researchers in relation to their taxing lifestyles. “It’s really forced me to re-evaluate my relationship with stress and my work–life balance,” says one postdoc in the United Kingdom. Now, she is diligent about pacing herself and feels much less guilty for taking breaks. “This experience has helped me develop healthier habits and skills that I’ll carry with me even after I recover.”

On the flip side, the rigours of academic research have also helped to prepare these scholars for the ups and downs of long COVID. “Science has also trained me [to have] resilience, persistence, patience,” says Sarah. “These are helpful qualities when dealing with chronic conditions.”

Koppes agrees. Inspired by her own conditions, she has shifted her research towards the autoimmunity and neurology of long COVID symptoms to interrogate her experience.

For now, Koppes is celebrating the small victories in her slow recovery: being able to walk the dog or take public transport instead of relying on car rides. On her wall at home hangs a reproduction of a painting by the impressionist artist Edward Henry Potthast titled Wild Surf, Ogunquit, Maine. It depicts a beach that she and her husband frequented pre-COVID-19 — a reminder, she says, not of everything she’s lost, but of what she might one day return to.

Ridiculously powerful PC with six Nvidia RTX 4090 GPUs and liquid cooling finally gets tested — there are no game benchmarks, but plenty of tests for scientists and pros

Comino, known for its liquid-cooled servers, has finally released its new flagship for testing. 

The Comino Grando Server has been designed to meet a broad spectrum of high-performance computing needs, ranging from data analytics to gaming.

In a comprehensive test by StorageReview, the Grando Server, alongside a Grando Workstation variation, was put through a series of rigorous benchmarks including Blender 4.0, Luxmark, OctaneBench, Blackmagic RAW Speed Test, 7-zip Compression, and Y-Cruncher.

Comino Grando Server (Image credit: Comino)

The server, equipped with six Nvidia RTX 4090s, AMD’s Threadripper PRO 5995WX CPU, 512GB DDR5 DRAM, a 2TB NVMe SSD, and four 1600W PSUs, delivered impressive results, as you’d expect from those specifications.

Why scientists trust AI too much — and what to do about it

AI-run labs have arrived — such as this one in Suzhou, China. Credit: Qilai Shen/Bloomberg/Getty

Scientists of all stripes are embracing artificial intelligence (AI) — from developing ‘self-driving’ laboratories, in which robots and algorithms work together to devise and conduct experiments, to replacing human participants in social-science experiments with bots1.

Many downsides of AI systems have been discussed. For example, generative AI such as ChatGPT tends to make things up, or ‘hallucinate’ — and the workings of machine-learning systems are opaque.

In a Perspective article2 published in Nature this week, social scientists say that AI systems pose a further risk: that researchers envision such tools as possessed of superhuman abilities when it comes to objectivity, productivity and understanding complex concepts. The authors argue that this puts researchers in danger of overlooking the tools’ limitations, such as the potential to narrow the focus of science or to lure users into thinking they understand a concept better than they actually do.

Scientists planning to use AI “must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline”, write co-authors Lisa Messeri, an anthropologist at Yale University in New Haven, Connecticut, and Molly Crockett, a cognitive scientist at Princeton University in New Jersey.

The peer-reviewed article is a timely and disturbing warning about what could be lost if scientists embrace AI systems without thoroughly considering such hazards. It needs to be heeded by researchers and by those who set the direction and scope of research, including funders and journal editors. There are ways to mitigate the risks. But these require that the entire scientific community views AI systems with eyes wide open.

To inform their article, Messeri and Crockett examined around 100 peer-reviewed papers, preprints, conference proceedings and books, published mainly over the past five years. From these, they put together a picture of the ways in which scientists see AI systems as enhancing human capabilities.

In one ‘vision’, which they call AI as Oracle, researchers see AI tools as able to tirelessly read and digest scientific papers, and so survey the scientific literature more exhaustively than people can. In both Oracle and another vision, called AI as Arbiter, systems are perceived as evaluating scientific findings more objectively than do people, because they are less likely to cherry-pick the literature to support a desired hypothesis or to show favouritism in peer review. In a third vision, AI as Quant, AI tools seem to surpass the limits of the human mind in analysing vast and complex data sets. In the fourth, AI as Surrogate, AI tools simulate data that are too difficult or complex to obtain.

Informed by anthropology and cognitive science, Messeri and Crockett predict risks that arise from these visions. One is the illusion of explanatory depth3, in which people relying on another person — or, in this case, an algorithm — for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.

Another risk is that research becomes skewed towards studying the kinds of thing that AI systems can test — the researchers call this the illusion of exploratory breadth. For example, in social science, the vision of AI as Surrogate could encourage experiments involving human behaviours that can be simulated by an AI — and discourage those on behaviours that cannot, such as anything that requires being embodied physically.

There’s also the illusion of objectivity, in which researchers see AI systems as representing all possible viewpoints or not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data. “There’s a risk that we forget that there are certain questions we just can’t answer about human beings using AI tools,” says Crockett. The illusion of objectivity is particularly worrying given the benefits of including diverse viewpoints in research.

Avoid the traps

If you’re a scientist planning to use AI, you can reduce these dangers through a number of strategies. One is to map your proposed use to one of the visions, and consider which traps you are most likely to fall into. Another approach is to be deliberate about how you use AI. Deploying AI tools to save time on something your team already has expertise in is less risky than using them to provide expertise you just don’t have, says Crockett.

Journal editors receiving submissions in which use of AI systems has been declared need to consider the risks posed by these visions of AI, too. So should funders reviewing grant applications, and institutions that want their researchers to use AI. Journals and funders should also keep tabs on the balance of research they are publishing and paying for — and ensure that, in the face of myriad AI possibilities, their portfolios remain broad in terms of the questions asked, the methods used and the viewpoints encompassed.

All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed. For decades, and long before AI was a reality for most people, social scientists have studied AI. Everyone — including researchers of all kinds — must now listen.

Could AI-designed proteins be weaponized? Scientists lay out safety guidelines

AlphaFold structure prediction for probable disease resistance protein At1g58602.

The artificial-intelligence tool AlphaFold can design proteins to perform specific functions.Credit: Google DeepMind/EMBL-EBI (CC-BY-4.0)

Could proteins designed by artificial intelligence (AI) ever be used as bioweapons? In the hope of heading off this possibility — as well as the prospect of burdensome government regulation — researchers today launched an initiative calling for the safe and ethical use of protein design.

“The potential benefits of protein design [AI] far exceed the dangers at this point,” says David Baker, a computational biophysicist at the University of Washington in Seattle, who is part of the voluntary initiative. Dozens of other scientists applying AI to biological design have signed the initiative’s list of commitments.

“It’s a good start. I’ll be signing it,” says Mark Dybul, a global health policy specialist at Georgetown University in Washington DC who led a 2023 report on AI and biosecurity for the think tank Helena in Los Angeles, California. But he also thinks that “we need government action and rules, and not just voluntary guidance”.

The initiative comes on the heels of reports from US Congress, think tanks and other organizations exploring the possibility that AI tools — ranging from protein-structure prediction networks such as AlphaFold to large language models such as the one that powers ChatGPT — could make it easier to develop biological weapons, including new toxins or highly transmissible viruses.

Designer-protein dangers

Researchers, including Baker and his colleagues, have been trying to design and make new proteins for decades. But their capacity to do so has exploded in recent years thanks to advances in AI. Endeavours that once took years or were impossible — such as designing a protein that binds to a specified molecule — can now be achieved in minutes. Most of the AI tools that scientists have developed to enable this are freely available.

To take stock of the potential for malevolent use of designer proteins, Baker’s Institute of Protein Design at the University of Washington hosted an AI safety summit in October 2023. “The question was: how, if in any way, should protein design be regulated and what, if any, are the dangers?” says Baker.

The initiative that he and dozens of other scientists in the United States, Europe and Asia are rolling out today calls on the biodesign community to police itself. This includes regularly reviewing the capabilities of AI tools and monitoring research practices. Baker would like to see his field establish an expert committee to review software before it is made widely available and to recommend ‘guardrails’ if necessary.

The initiative also calls for improved screening of DNA synthesis, a key step in translating AI-designed proteins into actual molecules. Currently, many companies providing this service are signed up to an industry group, the International Gene Synthesis Consortium (IGSC), that requires them to screen orders to identify harmful molecules such as toxins or pathogens.

“The best way of defending against AI-generated threats is to have AI models that can detect those threats,” says James Diggans, head of biosecurity at Twist Bioscience, a DNA-synthesis company in South San Francisco, California, and chair of the IGSC.

Risk assessment

Governments are also grappling with the biosecurity risks posed by AI. In October 2023, US President Joe Biden signed an executive order calling for an assessment of such risks and raising the possibility of requiring DNA-synthesis screening for federally funded research.

Baker hopes that government regulation isn’t in the field’s future — he says it could limit the development of drugs, vaccines and materials that AI-designed proteins might yield. Diggans adds that it’s unclear how protein-design tools could be regulated, because of the rapid pace of development. “It’s hard to imagine regulation that would be appropriate one week and still be appropriate the next.”

But David Relman, a microbiologist at Stanford University in California, says that scientist-led efforts are not sufficient to ensure the safe use of AI. “Natural scientists alone cannot represent the interests of the larger public.”
