The beauty of what science can do when urgently needed

Cultivarium chief scientific officer Nili Ostrov works to make model organisms more useful and accessible for scientific research. Credit: Donis Perkins

Nili Ostrov has always been passionate about finding ways to use biology for practical purposes. So perhaps it wasn’t surprising that, when the COVID-19 pandemic hit during her postdoctoral studies, she went in the opposite direction from most people, moving to New York City to work as the director of molecular diagnostics in the Pandemic Response Lab, providing COVID-19 tests and surveilling viral variants. She was inspired by seeing what scientists could accomplish and how much they could help when under pressure.

Now the chief scientific officer at Cultivarium in Watertown, Massachusetts, Ostrov is bringing that sense of urgency to fundamental problems in synthetic biology. Cultivarium is a non-profit focused research organization, a structure that comes with a finite amount of time and funding to pursue ‘moonshot’ scientific goals, which would usually be difficult for academic laboratories or start-up companies to achieve. Cultivarium has five years of funding, which started in 2022, to develop tools to make it possible for scientists to genetically engineer unconventional model organisms — a group that includes most microbes.

Typically, scientists are limited to working with yeast, the bacterium Escherichia coli and other common lab organisms, because the necessary conditions to grow and manipulate them are well understood. Ostrov wants to make it easier to engineer other microbes, such as soil bacteria or microorganisms that live in extreme conditions, for scientific purposes. This could open up new possibilities for biomanufacturing drugs or transportation fuels and solving environmental problems.

What is synthetic biology and what drew you to it?

Synthetic biology melds biology and engineering — it is the level at which you say, “I know how this part works. What can I do with it?” Synthetic biologists ask questions such as, what is this part useful for? How can it benefit people or the environment in some way?

During my PhD programme at Columbia University in New York City, my team worked with the yeast that is used for brewing beer — but we asked, can you use these yeast cells as sensors? Because yeast cells can sense their environment, we could engineer them to detect a pathogen in a water sample. In my postdoctoral work at Harvard University in Cambridge, Massachusetts, we investigated a marine bacterium, Vibrio natriegens. A lot of time during research is spent waiting for cells to grow. V. natriegens doubles in number about every ten minutes — the fastest growth rate of any organism. Could we use it to speed up research?

But using V. natriegens and other uncommon research organisms is hard work. You have to develop the right genetic-engineering tools.
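To see why that doubling time matters, idealized exponential growth compounds fast. A minimal sketch (the ~10-minute figure for V. natriegens comes from the interview; the ~20-minute figure for E. coli and the function itself are illustrative assumptions, not measured protocol values):

```python
# Idealized, unchecked exponential growth: N(t) = N0 * 2^(t / doubling_time).
# Doubling times here are rough illustrations, not lab measurements.

def population(n0: float, minutes: float, doubling_time: float) -> float:
    """Cell count after `minutes` of idealized exponential growth."""
    return n0 * 2 ** (minutes / doubling_time)

# Starting from a single cell, after two hours:
v_natriegens = population(1, 120, 10)  # twelve ~10-minute doublings
e_coli = population(1, 120, 20)        # six ~20-minute doublings

print(int(v_natriegens))  # 4096
print(int(e_coli))        # 64
```

Under these toy assumptions, the faster organism is already 64 times more numerous after two hours, which is why a halved doubling time can compress days of waiting in a growth-limited workflow.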

How did the COVID-19 pandemic alter your career trajectory?

It pushed me to do something that I otherwise would not have done. During my postdoctoral programme, I met Jef Boeke, a synthetic biologist at New York University. In 2020, he asked me whether I wanted to help with the city’s Pandemic Response Lab, because of my expertise in DNA technology. I’m probably one of the only people with a newborn baby who moved into Manhattan when COVID-19 hit.

That was an amazing experience: I took my science and skills and used them for something essential and urgent. In a couple of months, we set up a lab that supported the city’s health system. We monitored for new variants of the virus using genomic sequencing and ran diagnostic tests.

Seeing what science can do when needed — it was beautiful. It showed me how effective science can be, and how fast science can move with the right set-up.

How did that influence what you’re doing now with Cultivarium?

COVID-19 showed me how urgently needed science can be done. It’s about bringing together the right people from different disciplines. Cultivarium is tackling fundamental problems in science, the kind usually addressed in academic settings, with the fast pace and dynamism of a start-up company.

We need to make progress on finding ways to use unconventional microbes to advance science. A lot of bioproduction of industrial and therapeutic molecules is done in a few model organisms, such as E. coli and yeast. Imagine what you could achieve if you had 100 different organisms. If you’re looking to produce a protein that needs to be made in high temperatures or at an extreme pH, you can’t use E. coli, because it won’t grow.

How is Cultivarium making unconventional microbes research-friendly?

It took my postdoctoral lab team six years to get to the point where we could take V. natriegens, which we initially didn’t know how to grow well or engineer, and knock out every gene in its genome.

At Cultivarium, we’re taking a more systematic approach to provide those culturing and engineering tools for researchers to use in their organism of choice. This kind of topic gets less funding, because it’s foundational science.

So, we develop and distribute the tools to reproducibly culture microorganisms, introduce DNA into them and genetically engineer them. Only then can the organism be used in research and engineering.

Developing these tools takes many years and a lot of money and skills. It takes a lot of people in the room: a biologist, a microbiologist, an automation person, a computational biologist, an engineer. As a non-profit company, we try to make our tools available to all scientists to help them to use their organism of choice for a given application.

We have funding for five years from Schmidt Futures, a non-profit organization in New York City. We’re already releasing and distributing tools and information online. We’re building a portal where all data for non-standard model organisms will be available.

Which appeals to you more — academic research or the private sector?

I like the fast pace of start-up companies. I like the accessibility of expertise: you can bring the engineer into the room with the biologists. I like that you can build a team of people who all work for the same goal with the same motivation and urgency.

Academia is wonderful, and I think it’s very important for people to get rigorous training. But I think we should also showcase other career options for early-career researchers. Before the pandemic, I didn’t know what it was like to work in a non-academic set-up. And once I got a taste of it, I found that it worked well for me.

This interview has been edited for length and clarity.

Deep-sea mining plans should not be rushed

Employees of Soil Machine Dynamics (SMD) work on a subsea mining machine being built for Nautilus Minerals at Wallsend, northern England April 14, 2014.

Giant excavators for use in deep-sea mining must stay parked for now. Credit: Nigel Roddis/Reuters

For more than a week, representatives of nations around the world have been meeting at a session of the International Seabed Authority (ISA) in Kingston, Jamaica. The ISA was established under the UN Convention on the Law of the Sea 30 years ago with the task of protecting the sea bed in international waters — which comprise roughly half of the world’s ocean. The goal of the latest meeting is to write the rules for the commercial mining of metals such as cobalt, manganese and nickel. These are needed in increasing quantities, mainly to power low-carbon technologies, such as battery storage.

The meeting is set to end on 29 March, and there’s mounting concern among researchers that the final text is being rushed, not least because some countries including China, India, Japan and South Korea want to press ahead with commercial exploitation of deep-sea minerals. Some in the mining industry would like excavations to begin next year.

China dominates the global supply of critical minerals and so far has the most sea-bed exploration licences of any country. These permits do not allow commercial exploitation. Meanwhile, one company, The Metals Company, based in Vancouver, Canada, wants to apply for a commercial permit, potentially in late July.

There is little justification for such haste. Commercial sea-bed mining is not permitted for a reason: too little is known about the deep-sea ecosystem, such as its biodiversity, its interactions with other ecosystems and the impact of disturbance from commercial operations. Until we have the results of long-term studies, the giant robotic underwater excavators, drills and pumps that are ready to go must remain parked. Researchers have told Nature that the text is nowhere near ready, and that important due diligence is being circumvented. Outstanding issues need to be resolved, such as what is considered an acceptable level of environmental harm and how much contractors should pay the ISA for the right to extract minerals.

Last month, the ISA published the latest draft of its mining regulations text. This ran to 225 pages, and researchers and conservation groups were alarmed to see that, unlike previous drafts, it incorporated proposals that would speed up the process for issuing commercial permits, and it also weakened environmental protections.

Worryingly, a few of the changes in the latest text were not identified by square brackets — the practice in international negotiations to highlight wording that has not been agreed on by all parties. Nor were the sources for some changes attributed.

Furthermore, in an earlier version of the text, there was a proposal to include measures to protect rare or fragile ecosystems, but this wording is not in the latest draft. Another suggestion was to require that mining applications be decided on within 30 days of their receipt, rather than waiting for the ISA’s twice-yearly meeting — an idea that has support from some in the industry and that does appear in the latest draft.

Proposing changes to draft texts is normal in a negotiation, but failing to publicly identify who is proposing them is not. It damages trust and risks preventing an outcome with which all parties are happy.

Questions are rightly being asked of the leadership of the ISA secretariat, which organizes meetings and is responsible for producing and distributing texts, as well as the leadership of the ISA’s governing council. Nature has reached out to the secretariat with questions, but no response was received by the time this editorial went to press. We urge the ISA to respond, engage and explain.

It is possible that the benefits of deep-sea mining to low-carbon technologies outweigh its risks, if those risks can be mitigated. But some 25 countries are calling for a moratorium on the practice, at least until the science is better understood. The European Parliament also backs a moratorium. This is also the official view of the High Level Panel for a Sustainable Ocean Economy, a group of 18 countries that have pledged not to undertake commercial deep-sea mining in their national waters — despite founding member Norway’s decision to open up applications for commercial licences, which the European Parliament has criticized.

The UN Convention on Migratory Species is urging that its member states should neither encourage nor engage in deep-sea mining “until sufficient and robust scientific information has been obtained to ensure that deep-seabed mineral exploitation activities do not cause harmful effects to migratory species, their prey and their ecosystems”.

The ISA and its member states should exercise care, make their decisions on a consensus of evidence and be transparent in doing so, because transparency is foundational to the success of international relations. The deep seas are the least explored parts of the planet; we should not allow for their loss before we even understand their complexities.

How harsh visa-application policies are hobbling global research

In February, I was meant to speak at the European Conference of Tropical Ecology in Lisbon, providing evidence of extinction risks to some frog species used as bushmeat in West Africa, and highlighting the need for policies that regulate hunting pressures.

In January, I duly applied at the Dutch embassy in Accra for a business visa to the European Union Schengen area. My application included the invitation from the conference organizers, a letter from my sponsors — the Center for International Forestry Research and World Agroforestry, and the UK Global Challenges Research Fund’s Trade, Development and the Environment Hub — and an introductory letter from the dean of graduate studies at the University of Ghana, confirming my status as a final-year PhD candidate. It also included current and old passports that showed my extensive travels, mostly to the United Kingdom.

Almost three weeks later, my passport was returned with a rejection note, stating that I had not provided justification for the purpose and conditions of my intended stay, and that there were reasonable doubts about my intention to leave the EU before the visa expired.

I wasn’t the only one. Of the ten speakers from low- and middle-income countries (LMICs) invited to present at the conference’s “Wildmeat: opportunities and risks” session, only four got visas. Another person withdrew voluntarily.

The participation of researchers from LMICs at international conferences on biodiversity is of the utmost importance. Earth’s biodiversity is richest in these nations, and includes ecosystems that provide important services, such as carbon sequestration, that benefit people globally. Our participation is not a matter of simply ticking the inclusivity boxes, but a deliberate effort to ensure that the voices of people for whom some of these conservation policies are formulated are heard, and their opinions sought.

However, whereas colleagues from wealthy nations, even as undergraduate students, can easily go to LMICs to participate in conferences and do research, the same cannot be said for those going the other way. The same documentation that scientists from high-income countries present at embassies — sponsorship, invitation and introductory letters — is apparently inadequate when submitted by people from LMICs. According to a global survey in 2018 by the research organization RAND, African and Asian researchers are the most likely to have visa-related challenges for short-term visits (see go.nature.com/2z9dabp). A 2023 analysis by the Royal Society in London showed that in 2022, of the 30 territories for which the United Kingdom refused visitor visas most often, 22 were in Africa (see go.nature.com/3vxruba).

These refusals come at a huge cost to individual researchers. Visa applications require scientists to be studious with paperwork, commit often large sums of money and make several trips to embassies that are sometimes outside their home country. My experience left me feeling demoralized, embarrassed and insulted by the implication that I and people like me couldn’t be trusted to attend a conference without outstaying our welcome.

This broken situation also comes at a cost to institutions. My sponsor spent approximately US$1,500 on my visa fee, return flight, insurance and conference registration (all non-refundable). Conference organizers and host institutions in wealthy countries spend a lot of time and effort searching for and inviting credible and accomplished researchers from LMICs to be part of the global conservation effort — time and money that is often wasted, which then discourages meeting organizers from prioritizing speakers from those nations.

Moreover, it comes at a cost to global efforts to prevent further biodiversity loss. Many high-income countries say that they are committed to global biodiversity conservation, and governments are pledging billions of dollars in support. Their visa policies for researchers should reflect this priority.

I am not suggesting that embassies should operate without caution and issue visas without due diligence. But they should ensure that eligible candidates who meet the criteria are not prevented from participating in international discourse. This requires a distinct form of short-term visa review for scientists attending conferences, seminars, workshops and research programmes, and a commitment to improve communication channels between visa-issuing authorities, conference organizers and academic institutions, both in the countries hosting the events and in those that researchers are travelling from.

Part of this is ensuring that entry-clearance officers do not fixate on a scientist’s financial worth as a measure of the credibility of their intent to return to their home countries. Bank statements are often required to support visa applications. Mine show the grants I have received from the Rufford Foundation, Synchronicity Earth and the BaNGA-Africa/Carnegie Corporation of New York. Others are not so fortunate, and this approach risks allowing only well-off, established scientists to obtain short-stay visas — automatically preventing many early-career researchers from participating in global research conversations.

Conferences are where research collaborations are formed and where decisions on funding, publishing and policymaking are made. It is imperative that visa issues do not bar scientists from LMICs from benefiting from the opportunities they provide.

Competing Interests

The author declares no competing interests.

The ‘Mother Tree’ idea is everywhere — but how much of it is real?

It was a call from a reporter that first made ecologist Jason Hoeksema think things had gone too far. The journalist was asking questions about the wood wide web — the idea that trees communicate with each other through an underground fungal network — that seemed to go well beyond what Hoeksema considered to be the facts.

Hoeksema discovered that his colleague, Melanie Jones, was becoming restive as well: her peers, she says, “had been squirming for a while and feeling uncomfortable with how the message had morphed in the public literature”. Then, a third academic, mycorrhizal ecologist Justine Karst, took the lead. She thought speaking out about the lack of evidence for the wood wide web had become an ethical obligation: “Our job as scientists is to present the truth, as close as we can get to it”.

Their concerns lay predominantly with a depiction of the forest put forward by Suzanne Simard, a forest ecologist at the University of British Columbia in Vancouver, in her popular work. Her book Finding the Mother Tree, for example, was published in 2021 and swiftly became a bestseller. In it she drew on decades of her own and others’ research to portray forests as cooperating communities. She said that trees help each other out by dispatching resources and warning signals through fungal networks in the soil — and that more mature individuals, which she calls mother trees, sometimes prioritize related trees over others.

The idea has enchanted the public, appearing in bestselling books, films and television series. It has inspired environmental campaigners, ecology students and researchers in fields including philosophy, urban planning and electronic music. Simard’s ideas have also led to recommendations on forest management in North America.

But in the ecology community there is a groundswell of unease with the way in which the ideas are being presented in popular forums. Last year, Karst, at the University of Alberta in Edmonton, Canada; Hoeksema, at the University of Mississippi in Oxford; and Jones, at the University of British Columbia in Kelowna, Canada, challenged Simard’s ideas in a review1, digesting the evidence and suggesting that some of Simard’s descriptions of the wood wide web in popular communications had “overlooked uncertainty” and were “disconnected from evidence”. They were later joined by other researchers, including around 30 forest and fungal scientists, who published a separate paper that questioned the scientific credibility2 of two popular books about forests — one of them Simard’s — saying that some of the claims in her book “do not correctly reflect, and even contradict, the data”. The article warns of “the perils of plant personification”, saying that the desire to humanize plant life “may eventually harm rather than help the commendable cause of preserving forests”. Another review of the evidence appeared in May last year3.

Simard, however, disagrees with these characterizations of her work and is steadfast about the scientific support for her idea that trees cooperate through underground fungal networks. “They’re reductionist scientists,” she says when asked about criticism of her work. “They’ve missed the forest for the trees.” She is concerned that the debate over the details of the theory diminishes her larger goal of forest protection and renewal. “The criticisms are a distraction, to be honest, from what’s happening in our ecosystems.”

Robert Kozak, dean of the faculty of forestry at the University of British Columbia, supports Simard and calls her “a world-renowned scientist, a strong advocate for science-based environmental solutions, an amazing communicator, mentor, and teacher, and a wonderful colleague”.

The dispute offers a window into how scientific ideas take shape and spread in popular culture — and raises questions about what the responsibilities of scientists are as they communicate their ideas more widely.

Conversation starter

In her book, Simard tells of an idyllic childhood, with summers spent in the ancient forests of British Columbia. While an undergraduate, she worked at a forestry company, witnessing clear-cut logging at first hand. The experience set the course of her career. On graduating, she took a government forest-service post, and joined the University of British Columbia in 2002. She still works there, running a research programme called the Mother Tree Project, which develops sustainable forest-renewal practices.

One of Simard’s earliest papers appeared in Nature4 in 1997, describing evidence that carbon could travel underground between trees of different species, and suggesting that this could be through an underground fungal network. Nature put the paper on its cover and dubbed the idea the wood wide web — a term that quickly caught on and is now widely used to describe the idea (Nature’s news team is editorially independent of its journal team).

Tree leaves turn sunshine and carbon dioxide into sugars, and some of this flows to their roots and into mycorrhizal fungi, which grow into the root tip and donate water and nutrients in return. There was already evidence, from a laboratory study5, that carbon can move through the tendrils of the fungi that link seedling roots together. But Simard’s approach, a controlled experiment in clear-cut forest, was “groundbreaking”, says David Johnson, who studies the ecology of soil microbes at the University of Manchester, UK.

Paper Birch (Betula papyrifera) trees in autumn at Pictured Rocks National Lakeshore, Michigan, in 2019.

Forest ecologist Suzanne Simard’s 1997 study looked at carbon transfer between Douglas fir (Pseudotsuga menziesii) and paper birch trees (Betula papyrifera, pictured). Credit: Steve Gettle/Nature Picture Library

She planted pairs of seedlings — one paper birch (Betula papyrifera) and one Douglas fir (Pseudotsuga menziesii) — close to one another. She shaded the Douglas fir to prevent it from manufacturing sugars. Then she bathed the air surrounding each seedling with traceable, labelled carbon dioxide. She found carbon in sugars made by the birch in the needles of the shaded Douglas fir. Smaller quantities of sugars from the fir were found in the birch.

A third seedling in each group — western red cedar (Thuja plicata) — which is not colonized by the same types of mycorrhizal fungi, absorbed less carbon than did the other seedlings. The results, the authors concluded4, suggest that carbon transfer between birch and Douglas fir “is primarily through the direct hyphal pathway”. That is, there could be an active fungal pipeline connecting the roots of both trees.

Over the years, Simard and other researchers developed, in published work, the idea that there could be a common mycorrhizal network in the forest soil, connecting many trees of the same and different species.

About a decade ago, Simard began to take the idea further, and into the media. In a short film called Mother Trees Connect the Forest, she said of forest trees: “These plants are really not individuals in the sense that Darwin thought they were individuals competing for survival of the fittest. In fact, they’re interacting with each other, trying to help each other survive.”

In 2016, in a TED talk that has had more than 5.6 million views, she portrayed forest trees as “not just competitors” — competition being foundational to the understanding of how ecosystems work — “but as cooperators”. Her 1997 experiment, she said, had revealed evidence for a “massive underground communications network”. Her later work, she added in the TED talk, found that some bigger, older “mother trees”, as she called them, are particularly well connected. They nurture their young — preferentially sending them carbon and making space for them in their root systems. What’s more, “when mother trees are injured or dying, they also send messages of wisdom on to the next generation of seedlings.”

Then came her book — a memoir and detailed account of her work. It has been praised for its vivid and personal depiction of the scientific life.

The book concludes that to escape environmental devastation, humans should adopt attitudes to nature that are similar to those of Indigenous people. “This begins by recognizing that trees and plants have agency,” she writes.

Simard has worked to change forestry practices in North America in line with her ideas, for example by sparing the oldest trees during clear-cutting so that they can provide an infrastructure for the next generation of planted trees.

Challenging ideas

But academics were increasingly concerned that the ideas and the publicity that they were attracting had moved beyond what was warranted by the scientific evidence.

The disquiet came to a head when the 2023 scientific review1 was published. The authors, Hoeksema, Jones and Karst, have all collaborated scientifically with Simard in the past; Jones was an author of the 1997 paper. The review considers the evidence for popular claims made about the wood wide web.

Their review has drawn praise for its scholarship. It is “the gold standard of how one should tackle a contentious and important field”, says James Cahill, who studies plant behaviour at the University of Alberta.

Simard takes the opposite view: the paper, she says, fails to see the bigger picture, and its prominence is “an injustice to the whole world”.

The review laid out what the authors regard as the three key claims underlying the popular idea of the ‘mother tree’: that networks of different fungi linking the roots of different trees — known as common mycorrhizal networks (CMNs) — are widespread in forests; that resources pass through such networks, benefiting seedlings; and that mature trees preferentially send resources along the networks to their kin. The scientists concluded that the first two are insufficiently supported by the scientific evidence, and that the last “has no peer-reviewed, published evidence”.

Some elements of the wood-wide-web idea are not in dispute, they say. For instance, mycorrhizal fungi can latch onto multiple roots of the same plant; one species of fungus can connect with the roots of different species of plant; and mycelia — a cobweb of fungal tendrils — can spread over large distances.

But evidence for a CMN in trees — one in which an individual fungus links the roots of the same or different tree species — is patchy, the review authors say. There are well-documented CMNs that link certain plants together: some orchids use CMNs to connect with trees, for instance, so that the orchids can feed on tree sugars when they can’t make their own.

And lab studies have shown that a single fungus can link seedlings of different tree species. But, the authors say, the lab studies compare with the forest in the same way that human cells grown in a dish compare with human bodies.

The review authors found that the strongest evidence for a CMN among trees in the field comes from five studies published between 2006 and 2020 — some led by ecologist Kevin Beiler, when he was a PhD student in Simard’s group. Beiler, who is now at the University for Sustainable Development in Eberswalde, Germany, used DNA techniques to map the networks of genetically distinct fungi in patches of old-growth forest, and found that they linked many trees of different ages, all Douglas fir — and the larger the tree, the greater the extent of its connections.

Suzanne Simard is the scientist most closely associated with the idea of the ‘wood wide web’. Credit: PA Images/Alamy

But Karst says that this doesn’t prove that the fungus was simultaneously connecting different trees, because mycelia decay easily and the technique would have picked up strands that were defunct as well as ones that were alive. And that arduous mapping exercise has been repeated for just two tree species — hardly grounds for generalization, she says.

So, do these common networks exist? “The consensus seems to be they are probably there but we do need more people to go out and map them at a fine scale to show that,” says Jones.

The second claim explored by the review is that resources travel through the CMN and benefit seedlings. It has three parts. The first — that resources do, by some means, travel through the soil between plants — commands some support, say the review authors. For example, they highlight research in a Swiss forest in which the canopies of certain trees had been bathed in labelled carbon dioxide. The experiment showed that carbon ended up in the roots of nearby trees.

But the authors say that proving the other two parts of the claim — that a CMN is the major conduit, and that seedlings typically benefit — is tricky. Lab and field studies often cannot rule out that resources moved through the soil for at least part of the way. The review highlighted three lab studies that directly observed carbon moving from one tree seedling to another through a mycorrhizal link, and these “are still the best evidence for the movement of resources within a CMN formed by woody plant species”, say the authors.

In the forest, the authors found 26 experiments reporting carbon transfer, but for each transfer, there was an alternative explanation for how the carbon travelled.

Some studies don’t look for a CMN but simply assess whether growing a seedling next to an adult tree improves its performance. For every instance in which a seedling benefited, the review states, there was another study in which its growth was inhibited. The results are “a huge smear from positive effects to negative effects and mostly neutral”, says Hoeksema.

The third claim is that mature trees communicate preferentially with offspring through CMNs, for example sending warning signals after an attack.

“When I heard that out in public I thought ‘Holy cow, that’s extraordinary’,” says Karst.

The team did find one lab experiment, published in 2017 and led by Brian Pickles, who did the work as a postdoc in Simard’s department, showing that more carbon was transferred between seedlings if they were related. But this happened in only two of the four lineages of seedlings, and it happened even when fungi were prevented from making links with each other — suggesting that one fungus exuded carbon into the soil and the other picked it up, the researchers say. In the review, the authors write that, for the third claim, “there is no current evidence from peer-reviewed, published field studies”.

Karst says that one reason why ideas about mother trees and their kin have traction in the public domain is that Simard, in media interviews and her book, has implied that findings made in the greenhouse were actually made in the forest, making the evidence seem stronger than it is. Simard disagrees. “I do not, and would never, imply anything misleading when presenting research.”

Karst gives the example of a passage from Simard’s book that describes a visit to a field site made by Simard and her master’s student, Amanda Asay. In October 2012, Asay was exploring a question that is important for forestry — do seedlings stand a better chance of survival if they grow near their mother tree, and, if so, is this because they receive preferential help through a common mycorrhizal network? Asay had blocked such connections in control seedlings by planting them in mesh bags with pores too small for fungi to fit through. What she found in that forest experiment, Simard says in her book, matched the theory that trees help their kin through networks. “Seedlings that were [the mother tree’s] kin survived better and were noticeably bigger than those that were strangers linked into the network, a strong hint that Douglas-fir mother trees could recognize their own.” Yet, when the review authors accessed Asay’s master’s thesis6, they found that her field work had discovered the opposite: that more non-kin seedlings survived than did kin (although the trend was not significant). As for the role of networks, the thesis states: “Our hypothesis that kin recognition is facilitated by mycorrhizal networks, however, was not supported”.

When asked about the discrepancy, Simard says that Asay also did greenhouse experiments for her master’s thesis, which used pairs of older and younger tree seedlings, and showed that older seedlings recognized younger kin and sent them more resources than they did to non-kin. After that, Asay and others in the team did find evidence that “there are responses that clearly show kin selection in those trees”.

Simard says that, when describing Asay’s findings in the forest in 2012, she made a writer’s choice to situate other findings as if they were discovered in the forest on that day. “I situated the story in the field, because that’s where the question came from.” That description, she says, encompasses “the whole body of work”.

Light micrograph of a washed spruce root with fuzzy fibers of ectomycorrhiza.

A spruce tree root with ectomycorrhizal fungi.Credit: Eye of Science/Science Photo Library

Asay’s subsequent work has not yet been published, for a tragic reason: she died in an accident in 2022. Her death was devastating for the group and publication stalled, Simard says. “We’re about to publish those papers,” she says.

Karst, Jones and Hoeksema’s overall conclusion is that CMNs do exist in the plant kingdom, and that resources can travel along them, benefiting at least one party, and sometimes both. In the forest, myriad mycelia extend through the soil that are capable of linking with trees, including those of different species. Whether they form a live thoroughfare, and whether resources travel through it between trees, has yet to be demonstrated in the field. Whether there are, in general, kin effects between plants was beyond the scope of their review, but the authors found nothing to support the idea that forest trees target kin through common mycorrhizal networks.

The review also found that some scientists have selectively cited and quoted from studies, boosting the credibility of the idea. The main problem, the review concludes, is not the quality of the science. “The most concerning issue is the rigour with which the results of these studies have been transmitted and interpreted.”

Rigour and reaction

Most of the response to the review has been positive, says Jones. “We got a lot of letters saying ‘thank you for doing that, it’s such a relief’. But I was really surprised how many of our colleagues said ‘you are brave’. That shouldn’t be, that you would have to be brave.”

But some researchers have taken issue with aspects of the review. Johnson disagrees with the team’s decision to exclude evidence for similar networks elsewhere in the plant kingdom, including between orchids and trees, and in grasslands and heathlands. It means, he says, they were “ignoring 90% of the work … our default position should be that we should expect mycorrhizal fungi to connect many plants”. It’s important, he says, to take a collective view of the evidence.

He agrees with the conclusion, however, that Simard’s idea of the cooperating forest is incompatible with evolutionary theory. “It’s all about the plants supporting each other for these altruistic reasons. I think that’s completely rubbish.”

Johnson’s view is that it “makes complete sense” that there are CMNs linking multiple forest trees and that substances might travel from one to another through them. Crucially, he says, this is not due to the trees supporting one another. A simple explanation, compatible with evolutionary theory, is that the fungi are acting to protect the trees that are their source of energy. It is beneficial for fungi to activate a tree’s defence signals, or to top up food for temporarily ailing trees. Pickles, who spent six years working with Simard before moving to the University of Reading, UK, says Simard’s ideas are not incompatible with competition, but give more weight to well-known phenomena in ecology, such as mutualism, in which organisms cooperate for mutual benefit. “It’s not altruism. It’s not some outrageous idea,” he says. “She certainly focuses more on facilitation and mutualism than is traditional in these fields, and that’s probably why there’s a lot of pushback.”

Other ecologists agree that there is some “polarization” in ecology between cooperative and competitive ideas. “The idea that perhaps not everything is trying to kill everything else is helpful,” says Katie Field, who studies plant-soil processes at the University of Sheffield, UK.

Regardless of the differences of opinion, Pickles says, “It’s good to have this rigorous analysis.”

Frustrating debate

Simard is exasperated by the debate.

Her work, she says, has “changed our whole world view of how the forest works”. There are now “dozens and dozens” of people “who have found that stuff moves through networks and through the soil”.

She says that the quality of her science has been unfairly challenged. To say that her 200 published papers are “not valid science, which I think is what they’re saying … that it was wrong … is not right,” she says. She is in the process of submitting responses to the critical papers to two journals, she says.

She says that she is unfairly accused of claiming CMNs are the only pathway for resources to travel between trees, and that she acknowledges other pathways in her papers and her book.

In media appearances, it’s hard to make that clear, she says: “It’s a very short period of time, and I don’t get into all those other evolutionary reasons for these things.”

Simard maintains that her critics attack her in the academic literature for imagery she has used only in public communication: “I talked about the mother tree as a way of communicating the science and then these other people say it’s a scientific hypothesis. They misuse my words.”

She argues that changing our understanding of how forests work from ‘winner takes all’ to ‘collaborative, integrated network system’ is essential for fixing the rampant destruction of old-growth forest, especially in British Columbia, where her research has focused. Indigenous cultures that have a more sustainable relationship with forests have mother and father trees, she says — “but the European male society hates the mother tree … somebody needs to write a paper on that”.

“I’m putting forward a paradigm shift. And the critics are saying ‘we don’t want a paradigm shift, we’re fine, just the way we are’. We’re not fine.”

Simard also says that Karst held a position partially funded by members of Canada’s Oil Sands Innovation Alliance that constitutes a conflict of interest. Extraction of oil deposits is associated with forest loss and environmental damage, and Karst was studying land reclamation after extraction. Karst says that she held this position until 2021, terminating it before starting work on the review, and that the work it funded did not overlap with the focus of the review on mycorrhizal networks.

Taking the research forwards will be challenging, says Johnson. Karst and her colleagues have produced an agenda for future field research — from mapping the genotypes of trees and fungi in a range of forests to using controls in experiments more stringently. But the agenda doesn’t impress Johnson. “It’s almost impossible to fulfil,” he says, partly because fieldwork is so fiendishly difficult.

Some scientists say that Simard’s popular work has had a positive influence on the field, even if elements of it remain controversial. Her work propelled the mycorrhizal research community from an obscure and underfunded field to one that excites the public, says Field. That has unleashed funding, stimulated researchers’ imaginations and influenced research agendas.

The backlash has further energized the community, she says. There are plans for a special edition of a journal she edits, and sessions have been added to the upcoming meeting of the International Mycorrhizal Society. All of this is helpful, says Field. “Anything that makes people think again and look again at the evidence is good.”

How AI is improving climate forecasts

Climate scientist Tapio Schneider is delighted that machine learning has taken the drudgery out of his day. When he first started modelling how clouds form, more than a decade ago, this mostly involved painstakingly tweaking equations that describe how water droplets, air flow and temperature interact. But since 2017, machine learning and artificial intelligence (AI) have transformed the way he works.

“Machine learning makes this science a lot more fun,” says Schneider, who works at the California Institute of Technology in Pasadena. “It’s vastly faster, more satisfying and you can get better solutions.”

Conventional climate models are built manually from scratch by scientists such as Schneider, who use mathematical equations to describe the physical processes by which the land, oceans and air interact and affect the climate. These models work well enough to make climate projections that guide global policy.

But the models rely on powerful supercomputers, take weeks to run and are energy-intensive. A typical model consumes up to 10 megawatt hours of energy to simulate a century of climate, says Schneider. On average, that is about the amount of electricity used annually by a US household. Moreover, such models struggle to simulate small-scale processes, such as how raindrops form, which often have an important role in large-scale weather and climate outcomes, says Schneider.

The branch of AI called machine learning — in which computer programs learn by spotting patterns in data sets — has shown promise in weather forecasting and is now stepping in to help with these issues in climate modelling.

“The trajectory of machine learning for climate projections is looking really promising,” says computer scientist Aditya Grover at the University of California, Los Angeles. Similar to the early days of weather forecasting, he says, there is a flurry of innovation that promises to transform how scientists model the climate.

But there are still hurdles to overcome — including convincing everyone that models based on machine learning are getting their projections right.

Copy cats

Researchers are using AI for climate modelling in three main ways. The first approach involves developing machine-learning models called emulators, which produce the same results as conventional models without having to crank through all the mathematical calculations.

Think of a conventional climate model as a computer program that can calculate where a ball will land on the basis of physical factors, such as how hard the ball is thrown, where it is thrown from and how fast it is spinning. An emulator is then akin to a seasoned player who has learnt the patterns in all those modelled outputs and can predict, without crunching through the maths, where the ball will land.

In a 2023 study, climate scientist Vassili Kitsios at the Commonwealth Scientific and Industrial Research Organisation in Melbourne, Australia, and his colleagues developed 15 machine-learning models that could emulate 15 physics-based models of the atmosphere1. They trained their system, called QuickClim, using the physical models’ projections of surface air temperature up to the year 2100 for two atmospheric carbon concentration pathways: a low and a high carbon emission scenario. Training each model took about 30 minutes on a laptop, says Kitsios. Researchers then asked the QuickClim models to forecast temperatures under a medium carbon emission scenario, which the models had not seen during training. The results closely matched those of the conventional physics-based models (see ‘AI climate model works at speed’).
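The emulator idea can be sketched in a few lines of code. The snippet below is purely illustrative: the “physics model”, the logarithmic CO2–temperature relationship and all the numbers are hypothetical stand-ins, not QuickClim’s actual method, which fits 15 far richer atmospheric models.

```python
import numpy as np

# Illustrative sketch only: a toy "emulator" fitted to the outputs of a
# stand-in "physics model". Every function and number here is hypothetical.

def physics_model(co2_ppm):
    # Stand-in for an expensive physics-based model: maps CO2
    # concentration (ppm) to a global-mean temperature anomaly (K).
    return 3.0 * np.log2(co2_ppm / 280.0)

# "Training data": run the expensive model on low- and high-emission pathways.
co2_train = np.concatenate([np.linspace(300, 450, 50),   # low scenario
                            np.linspace(600, 900, 50)])  # high scenario
temp_train = physics_model(co2_train)

# Fit a cheap polynomial emulator to the model's input/output pairs.
coeffs = np.polyfit(np.log2(co2_train / 280.0), temp_train, deg=1)
emulator = np.poly1d(coeffs)

# Query the emulator on a "medium" scenario it never saw during training.
co2_medium = np.linspace(480, 560, 20)
max_err = float(np.max(np.abs(emulator(np.log2(co2_medium / 280.0))
                              - physics_model(co2_medium))))
print(f"max emulation error: {max_err:.6f} K")
```

Because the toy training data are exactly log-linear, the fitted emulator reproduces the unseen “medium” scenario almost perfectly; real emulators face far noisier, higher-dimensional model outputs, but the interpolation principle is the same.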

AI climate model works at speed. Graphic showing similarity between a physics-based climate model and the AI emulator.

Source: Ref. 1

Once trained with all three emissions scenarios, QuickClim could quickly predict how global surface temperatures would change during the century under many carbon emission scenarios — about one million times faster than the conventional model could, says Kitsios. “With traditional models, you have less than five or so carbon concentration pathways you can analyse. QuickClim now allows us to do many thousands of pathways — because it’s fast,” he says.

QuickClim could one day help policymakers by exploring multiple scenarios, which would take conventional approaches simply too long to simulate. Models such as QuickClim will not replace physics-based models, Kitsios says, but could work alongside them.

Another team of researchers, led by atmospheric scientist Christopher Bretherton at the Allen Institute for Artificial Intelligence in Seattle, Washington, developed a machine-learning emulator for one physics-based atmospheric model. In a 2023 preprint study2, the team first created a training data set for the model, called ACE, by feeding ten sets of initial atmospheric conditions into a physics-based model. For each set, the physics-based model projected how 16 variables, including air temperature, water vapour and windspeed, would change over the next decade.

After training, ACE made each 6-hour forecast from the state it had predicted 6 hours earlier, stepping forward iteratively for up to a decade. And it performed well: better than a pared-down version of the physics-based model that runs at half the resolution to save time and computing power. In that comparison, ACE more accurately predicted the state of 90% of the atmospheric variables, ran 100 times faster and was 100 times more energy-efficient.
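The autoregressive rollout that ACE reportedly performs — feeding each 6-hour forecast back in as the next input — can be sketched as follows. Everything here is a hypothetical toy: a linear relaxation stands in for the trained network, and none of it reflects ACE’s actual architecture.

```python
import numpy as np

# Illustrative sketch of autoregressive rollout: a learned one-step (6-hour)
# map applied to its own output to march forward for a decade. The "model"
# here is a toy linear damping toward a climatological mean.

def step_6h(state):
    # One learned step: relax each variable 1% toward climatology (0)
    # plus a fixed forcing, standing in for a trained neural network.
    return 0.99 * state + 0.01

state = np.array([1.0, -2.0, 0.5])  # e.g. temperature, wind, humidity anomalies
steps_per_decade = 10 * 365 * 4     # 6-hour steps in ten years (ignoring leap days)

for _ in range(steps_per_decade):
    state = step_6h(state)          # feed each forecast back in as input

# After many steps this rollout converges to the map's fixed point (1.0 here).
print(state)
```

Long rollouts like this are a stress test: small per-step errors compound over roughly 14,600 steps, so an emulator’s stability over the whole trajectory matters as much as its single-step accuracy.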

Study author and climate scientist Oliver Watt-Meyer at the Allen Institute for Artificial Intelligence says he was surprised. “I was impressed by the result. These early findings suggest that we’ll be able to make these models that are very fast, accurate and able to probe a lot of different scenarios,” he says.

Firm foundations

In the second approach, researchers are using AI in a more fundamental way, to power the guts of climate models. These ‘foundation’ models can later be tweaked to perform a wide range of downstream climate- and weather-related tasks.

Foundation models hinge on the idea that there are fundamental, possibly unknown, patterns in the data that are predictive of the future climate, says Grover. By picking up on these hidden patterns, the hope is that foundation models might be able to churn out better climate and weather predictions than conventional approaches can, he says.

In a 2023 paper3, Grover and researchers at the tech giant Microsoft built the first such foundation model, called ClimaX. It was trained on the output from five physics-based climate models that simulated the global weather and climate from 1850 to 2015, including factors such as air temperature, air pressure and humidity, simulated on timescales from hours to years. Unlike emulator models, ClimaX was not trained towards the specific task of mimicking an existing climate model.

After this general training, the team fine-tuned ClimaX to perform a wide range of tasks. In one, the model predicted the average surface temperature, daily temperature range and rainfall worldwide from input variables of carbon dioxide, sulphur dioxide, black carbon and methane levels. This task was proposed in 2022 as a benchmark for comparing AI climate models, in a study by atmospheric physicist Duncan Watson-Parris at the University of California, San Diego, and his colleagues4. ClimaX predicted the state of temperature-related variables better than did three climate emulators built by Watson-Parris’s team3. However, it performed less well than the best of these three emulators in predicting rainfall, says Grover.

“I like the idea of foundation models,” says Watson-Parris. But these early findings don’t yet prove that ClimaX can outperform conventional climate models, or that foundation models are intrinsically superior to emulators, he adds.

In fact, it will be difficult to convince people that any machine-learning model can outperform conventional approaches, says Schneider. The true state of the future climate is unknown and we can’t wait for decades to see how well the models are performing, he says. Testing climate models against past climate behaviour is useful, but not a perfect measure of how well they can predict a future that’s likely to be vastly different from what humanity has seen before. Perhaps if models get better at seasonal weather prediction, they’ll be better at long-term climate predictions, too, says Schneider. “But to my knowledge, that’s not yet been demonstrated and that’s no guarantee,” he says.

Moreover, it is hard to interpret the way in which many of the AI models work, a problem known as the black box of AI, which could make them difficult to trust. “With climate projections, you absolutely need to trust the model to extrapolate,” says Watson-Parris.

Best of both

A third approach is to embed machine-learning components inside physics-based models to produce hybrid models — a sort of compromise, says Schneider.

An aerial view of thick snow covering houses and trees

Snow cover is hard for conventional climate models to predict, but hybrid models that blend machine-learning and physics-based techniques have successfully simulated snow cover and other small-scale processes.Credit: Mario Tama/Getty

In this case, machine-learning models would replace only the parts of conventional models that work less well — typically the modelling of small-scale, complex and important processes such as cloud formation, snow cover and river flows. These are a “key sticking point” in standard climate modelling, says Schneider. “I think the holy grail really is to use machine learning or AI tools to learn how to represent small-scale processes,” he says. Such hybrid models could perform better than purely physics-based models, while being more trustworthy than models built entirely from AI, he says.

In this vein, Schneider and his colleagues have built physical models of Earth’s atmosphere and land that contain machine-learning representations of a handful of such small-scale processes. They perform well, he says, in tests of river-flow and snow-cover projections against historical observations5. “We’ve found machine-learning models can be more successful than physical models in simulating certain phenomena,” says Schneider. Watson-Parris agrees with that assessment.

By the end of the year, Schneider and his team hope to complete a hybrid model of the ocean that can be coupled to the atmosphere and land models, as part of their Climate Modeling Alliance (CliMA) project.

Similar efforts to create ‘digital twins’ of Earth are being developed by NASA and the European Commission. The European project, called Destination Earth (DestinE), is entering its second phase in June this year, in which machine learning will have a key role, says Florian Pappenberger, who leads the forecast department at the European Centre for Medium-Range Weather Forecasts in Reading, UK.

The ultimate goal, says Schneider, is to create digital models of Earth’s systems, partly powered by AI, that can simulate all aspects of the weather and climate down to kilometre scales, with great accuracy and at lightning speed. We’re not there yet, but advocates say this target is now in sight.

How a tree-hugging protest transformed Indian environmentalism

Fifty years ago this week, Gaura Devi, an ordinary woman from a nondescript village in India, hugged a tree, using her body as a shield to stop the tree from being cut down. Little did she know that this simple act of defiance would be a seminal moment in the history of India and the world. Or that Reni village, where she lived, would come to be recognized as the fountainhead of the Chipko environmental movement.

Chipko, in Hindi, means ‘to stick’ or ‘to cling’. In the early 1970s, the Western Himalayan regions of Garhwal and Kumaon, where Reni is situated, were in turmoil. Villagers had been using non-violent methods, including tree hugging, to save their local forests from industrial logging for several months by the time Gaura Devi — and about two dozen women from Reni — showed up on the scene1. But the courage of this small group of women, who stood their ground against loggers who hurled threats and abuses at them, shot the movement to international attention.

What followed holds lessons for a planet teetering on the edge of a climate crisis: marginalized communities can succeed in catapulting environmental concerns into the global spotlight through innovative protest tactics. The Chipko movement gave rise to India’s Forest Conservation Act of 1980, the express aim of which is to conserve woodlands. A few years later, a new federal environment ministry was set up to act as a nodal agency for the protection of biodiversity and to safeguard the country’s environment1,3. Even the origin of the term tree hugger — which has since acquired pejorative connotations — can be traced back to the grassroots ecological consciousness that surfaced in India’s villages.

The movement and its aftermath hold sobering lessons, too. Villagers who threatened to cling on to trees were voicing concern not just about the state of the forests, but also about their own lives and livelihoods. Their desire was to exercise greater local control over woodland resources. Women such as Gaura Devi, for instance, had to walk long distances to gather firewood once the forests were denuded1,4.

Beginning in the late 1960s, activists who took inspiration from the leader of India’s anti-colonial nationalist campaign, Mohandas Gandhi, had begun to mobilize villagers in the Western Himalayas. Their strategy to improve economic opportunity in the region hinged on the Gandhian vision of bottom-up development. A network of cottage industries and cooperatives began to be set up to market forest products. The government’s competing top-down approach of auctioning forests to big private contractors came as an unwelcome intrusion3,5,6.

In essence, what the foot soldiers of Chipko wanted was an acknowledgement of their Indigenous rights to access forest resources that were crucial for their survival. What they got instead was a national law and a ministry populated by a new breed of power brokers — who, in the years to come, would decide at times that habitat preservation is possible only by keeping local communities out.

The big debate

Garhwal and Kumaon, part of the present-day state of Uttarakhand, were at the heart of independent India’s first big debate on environmental justice and equity for a reason. The terrain is mountainous and most of the land is forested. Lives and livelihoods centre heavily on access to land and water resources. Apart from subsistence agriculture, the main source of income in the region 50 years ago was remittance — money sent home from men who had migrated to cities or joined the armed forces2,4,5.

Although daily life was economically precarious for the villagers, the hills also presented them with a fragile environment. In the years preceding the Chipko movement, floods and landslides had wreaked havoc. Some of the villages worst affected lay near forests that had been felled1,5,6.

The idea of ‘commons’ and ‘sacred forests’ had been an intrinsic part of the cultural ethos of rural India, but the colonial period frayed the bonds that villagers tended to have with their immediate environment. The British Raj’s primary source of income was land revenue. As a result, converting forest or common land into agricultural land by getting rid of existing vegetation was very lucrative1,2.

Things did not improve after independence — the Indian government’s fourth five-year plan (1969–74) directed the state forest departments to take control of forests and open lands. This policy resulted in more restrictions on access for locals, who depended on nearby woodlands to meet their needs for food, fruit, fodder, firewood and other raw materials2,3.

The spark that ignited Gaura Devi’s tree-hugging protest came when the provincial government handed over ash trees in the Chamoli district of Garhwal to a private contractor to make sports goods. This disregarded the request put forward by a local artisans’ cooperative, the Dashauli Gram Swarajya Sangh (DGSS, Society for Village Self-Rule), which wanted to use the trees to make agricultural implements1,5.

The manner of protest itself was not new. A year earlier, in March 1973, in the nearby village of Mandal, women and men had come together to prevent the felling of trees under the leadership of a local activist, Chandi Prasad Bhatt, who was associated with the DGSS. As word spread, the act of chipko, or embracing a tree, became an andolan — a movement — which united people across social, caste and age groups, with even children participating in many villages1,5.

However, the protest in Reni village is now recognized as a seminal moment. Gaura Devi was an ordinary woman. But her extraordinary act continues to stand as a prominent signpost in the evolution of India’s ecological consciousness, even 50 years later.

Surprising saviours

On the day the trees near Reni village were to be felled, neither the DGSS members nor the men of the village were present. This was no coincidence, but a deliberate plan by forest department staff, who had organized meetings elsewhere to minimize the possibility of a large-scale protest. However, what they did not account for was the leadership of Gaura Devi, who headed the village’s mahila mandal (women’s group). On being alerted by a young girl who had seen the bus carrying the loggers, Gaura Devi marshalled the women of the village. They put their bodies in front of the axe-wielding men, eventually forcing the loggers to leave1,5.

A woman sits beside a cracked wall of her house in India

In Joshimath, India, cracks developed in homes in January 2023 as the town began to sink.Credit: Brijesh Sati/AFP/Getty

What made these village women, whose roles were conventionally restricted to the home, come out in force to protect the trees? The environmental activist Vandana Shiva, adopting an ecofeminist lens, argues that women, especially in rural areas, share close bonds with nature because their daily tasks are entwined with nature7. For the historian Ramachandra Guha, however, although Chipko did see women participating on a scale like never before, it would be simplistic to reduce it to a women’s movement. For Guha, Chipko is a peasant movement centred on the environment, in which both men and women were involved1,5.

Chipko is also synonymous with two men: Bhatt and Sunderlal Bahuguna. Both had strong roots in the community, having worked with voluntary organizations based on the Gandhian ideology of non-violence and satyagraha (which loosely translates as ‘truth force’). Through eco-development camps, Bhatt worked tirelessly to raise awareness about the fragility of the region’s environment. Bahuguna’s padayatras (journeys on foot) across India brought Chipko to the attention of people in other parts of the country and across the world. Chipko thus began to spread3,5,6.

In the forests of the Western Ghats in the south Indian state of Karnataka, Chipko inspired similar protests called Appiko (meaning ‘cling’ in the local language, Kannada). Internationally, Bahuguna took Chipko to university lecture halls in western Europe, and the simple idea of hugging trees for protection also resonated with activists in Canada and the United States6. In 1987, the movement was awarded the Right Livelihood Award, known as the alternative Nobel prize, for its impact on the conservation of natural resources in India.

The afterlife

Over the years, Chipko has been interpreted and reinterpreted by academics and activists. It has been the subject of many books, peer-reviewed papers and popular articles, and is mentioned in the curriculum of Indian schools. Chipko has a prominent place in the discourse on sustainability, too — as an example of the demand for sustainable development at a regional or local level. In March 2018, to commemorate the 45th anniversary of the movement, an iconic photograph of women joining hands around a tree appeared as a Google doodle, highlighting the movement’s international fame.

An immediate effect of the 1974 Reni protest was a 15-year moratorium on tree felling4. A slew of laws and regulations for protecting the forest came into effect. Ironically, Chipko, which had set these laws in motion, resulted in local communities losing access to the very forests that met their livelihood and subsistence needs. Little changed in terms of development or employment opportunities for the locals. With forest protection prioritized, even minor development projects, such as village roads or small irrigation channels, were denied permission. At the same time, large infrastructure projects promoted by the government, such as hydroelectric dams, got the go-ahead2.

The fragility of the landscape has steadily worsened. In February 2021, a catastrophic landslide in Chamoli district caused the death of some 200 people. What made the disaster worse were the multiple hydropower plants situated in the path of the landslides. In January 2023, disaster struck again when the town of Joshimath in Chamoli began sinking. Cracks developed on roads and in homes, and people had to be moved to relief camps. The unplanned development of the town on top of an earthquake-induced subsidence zone was a key reason. But a persistent concern in the region is its intrinsic ecological vulnerability, compounded in recent years by climate change.

What is the relevance of Chipko today? According to the United Nations, all of us are living amid the triple planetary crises of climate change, biodiversity collapse and air pollution. Humanity has also transgressed six out of the nine ‘planetary boundaries’ that ensure Earth stays in a safe operating space8. In the context of these monumental concerns, it’s remarkable that Chipko continues to inspire.

Social and environmental movements in India are still guided by its spirit. It is a strategy used by non-governmental organizations, activists and citizen groups in their fight against development projects that adversely affect tree cover. Thus, hundreds of Chipko-like movements have bloomed in villages and cities across India, inspired by a simple idea — hugging a tree to save it — and by the courage of village folk.

A villager from Chamoli, Dhan Singh Rana, wrote a song describing the life and struggles of Gaura Devi, in which he says, “In this world of injustice, show us your miracle again.”3 As the world careens from one crisis to the next, it is more imperative than ever to rekindle the memory of Gaura Devi. It should inspire us to act to save the planet and contribute to sustainable change, putting aside any misgivings about our own limitations as individuals or communities.

How PhD assessment needs to change

Person coughing into their elbow while in bed.

The field of audiomics combines artificial intelligence tools with human sounds, such as coughs, to evaluate health. Credit: Getty

A machine-learning tool shows promise for detecting COVID-19 and tuberculosis from a person’s cough. While previous tools used medically annotated data, this model was trained on more than 300 million clips of coughing, breathing and throat clearing from YouTube videos. Although it’s too early to tell whether this will become a commercial product, “there’s an immense potential not only for diagnosis, but also for screening” and monitoring, says laryngologist Yael Bensoussan.

Nature | 5 min read

Reference: arXiv preprint (not peer reviewed)

Everyone knows that usually, when it comes to charged particles, opposites attract. But in liquids, birds of a feather can flock together. Researchers investigating the long-standing mystery of why like-charged particles in solution can be drawn to each other have found that the nature of the solvent is key. The way that the liquid molecules arrange themselves around the particles can generate enough ‘electrosolvation force’ to overcome electrostatic repulsion. The findings might require “a major re-calibration of basic principles that we believe govern the interaction of molecules and particles, and that we encounter at an early stage in our schooling,” says physical chemist and co-author Madhavi Krishnan.

Physics World | 6 min read

Reference: Nature Nanotechnology paper

A study of 34 years of online discussions from Usenet to YouTube shows that, when it comes to rude behaviour, people — not platforms — are the root of incivility. Researchers used Google’s artificial-intelligence (AI) ‘toxicity classifier’ to identify “rude, disrespectful or unreasonable” comments. They found that over three decades, longer discussions tend to be more toxic, but heated debates don’t necessarily escalate or drive away participants. “Despite changes in social media and social norms over time, certain human behaviours persist, including toxicity,” says data scientist and co-author Walter Quattrociocchi.

El País | 3 min read

Reference: Nature paper

Reader poll

A bar chart illustrating responses to the question “Do you think PhD assessment needs to change?”

Last week, a Nature editorial argued that the way PhDs are assessed needs to change. Briefing readers largely agree.

“I acknowledge granting a PhD is a messy business — there is no fixed bar that candidates have to meet to successfully defend,” says recently minted physics PhD Kai Shinbrough. He says that more transparency around the process would go a long way to alleviate candidates’ anxieties.

Readers’ suggestions included assessing dissertations in a similar way to grant proposals — in writing, with iterative feedback cycles — or opening theses to public comments. Many felt there should be more emphasis on evaluating PhD projects on their originality, methods and analysis rather than their ‘positive’ or ‘negative’ outcomes.

Others highlighted that assessment shouldn’t be generalized across all academic disciplines with their varying contexts, cultures and histories. “Breathless demands for sweeping innovation in yet another domain of higher education would certainly lead to additional demands on the time and workloads of supervisors” and further disincentivize PhD supervision, says linguist Mark Post.

Several readers felt that their supervisors’ hands-off leadership left them to fend mostly for themselves, missing out on learning important skills, such as grant writing or lab management. “Research should not be a painful or solitary endeavour, it should be a communal effort driven by individuals committed to serving society,” says linguist Izadora Silva Pimenta.

Features & opinion

A massive mouse-embryo map tracks the development of more than 12 million cells as they mature into organs and other tissues. Building these cell atlases typically requires multinational collaborations and lots of cash. But this one was completed in one year by a three-researcher team on a US$370,000 budget. Among the first insights from the data is that the transcriptome (the cells’ set of messenger RNA) changes most dramatically in the hour just after birth: it’s “the most stressful moment in your life”, says geneticist and atlas co-creator Jay Shendure.

Nature | 6 min read

Reference: Nature paper

A casualty of faster-than-light travel and a teenage remnant of Homo sapiens grapple over whether it was all worth it in the latest short story for Nature’s Futures series.

Nature | 6 min read

Song differences too subtle for people to hear are the special spice that makes some male zebra finches (Taeniopygia guttata) particularly attractive to females. Researchers used an AI algorithm to analyse various acoustic features and create maps of the songs’ syllables. Female finches preferred songs with wider statistical gaps on these maps — they seem to be harder to learn and therefore indicate the singer’s fitness. How this statistical distance translates into sonic quality isn’t clear yet. “They all sounded like just a regular learned zebra finch to our ear,” says neuroscientist and study co-author Todd Roberts.

Nature Podcast | 30 min listen

Subscribe to the Nature Podcast on Apple Podcasts, Google Podcasts or Spotify, or use the RSS feed.

Share your snaps

Primate-behaviour researcher Bing Lin took this photograph of a troop of gelada monkeys (Theropithecus gelada) making their way across Ethiopian highlands under a gathering storm in 2017. It was just one moment of many Lin captured during a year of studying the monkeys whilst living in a tent nearby. It was also the moment that won Nature’s 2022 Working Scientist photo competition. Now, the competition returns. You could see your photo of scientists taking part in their craft — in or out of the lab — published in Nature, plus win a cash prize and a year’s subscription. Find out more information here.

Quote of the day

Biophysicist Esther Osarfo-Mensah says that using AI to help produce summaries of papers might help disseminate research to non-specialists. (Nature Index | 7 min read)

Larger or longer grants unlikely to push senior scientists towards high-risk, high-reward work

An analog clock and a ball of US paper currency balanced on a seesaw weight scale.

The duration and value of a grant are not likely to alter the research strategies of recipients in the United States. Credit: DigitalVision/Getty

Offering professors more money or time isn’t likely to dramatically change how they do their research, a survey of US-based academics has found.

The survey, described in a preprint article posted on arXiv in December1, was completed by 4,175 professors across several disciplines, including the natural sciences, social sciences, engineering, mathematics and humanities.

The study’s authors, Kyle Myers and Wei Yang Tham, both economists at Harvard Business School in Boston, Massachusetts, say the aim was to investigate whether senior scientists would conduct their research differently if they had more money but less time, or vice versa.

The research comes amid interest from some funders in tweaking the amount of time and money awarded to scientists to incentivize them to do more socially valuable work. For instance, in 2017, the Howard Hughes Medical Institute in Chevy Chase, Maryland, announced that it had extended its grants from five to seven years, arguing that the extra time would allow researchers to “take more risk and achieve more transformative advances”.

Acknowledging that the most reliable way to test how grant characteristics might affect researchers’ work is to award them actual grants — which was not feasible — Myers and Tham instead presented them with hypothetical scenarios.

The survey respondents were asked what research strategies they would pursue if they were offered a certain sum of grant money for a fixed time period. Both the value and duration were randomly assigned. The hypothetical grants were worth US$100,000 to $2 million and ran between two and ten years.

To capture the changes in strategy, the survey provided the participants with five options that they could take if they successfully obtained the hypothetical grant. These included pursuing riskier projects — for example, those with only a small chance of success — or ones that were unrelated to their current work, and increasing the speed or size of their ongoing projects.

The survey revealed that longer grants increased the researchers’ willingness to pursue riskier projects — but this held true only for tenured professors, who can afford to take a gamble because they tend to have long-term job security, an established reputation and access to more resources. The authors note, however, that any change in research strategy that resulted from receiving a longer grant was not substantial.

Non-tenured professors were not swayed towards risk-taking when they received longer grants. This finding suggests that longer grant designs don’t take into account the pressures that come with shorter employment contracts, says Myers. “If you’re a professor who’s on a 1- or 2-year contract, where you have to get renewed every year, then the difference between a 5-year or 10-year grant is not as important as performing in the next year or two,” he says.

Both tenured and non-tenured professors said longer, larger grants would slow down how fast they worked, “which suggests a significant amount of racing in science is in pursuit of resources”, the authors say, adding that this effect was also minor.

Myers and Tham report that the professors were “very unwilling” to reduce the amount of grant funding in exchange for a longer duration. “Money is much more valuable than time,” they conclude. They found that the professors valued a 1% increase in grant money nearly four times more than a 1% increase in grant duration. The study concludes that the researchers didn’t seem to view the length of a single grant as “an important constraint on their research pursuits given their preferences, incentives and expected access to future funding sources”.

Experimenting with grant structures

Carl Bergstrom, a biologist at the University of Washington in Seattle who has studied science-funding models, says it’s interesting that substantial changes in grant structure generally yielded little to no change in the researchers’ hypothetical behaviour. “I just don’t know what to make of that,” he says, noting that it’s unclear whether this finding is a result of the study design, or is saying something about scientists’ attitude towards change. “One consistent explanation of all of this would be that fairly reasonable changes in the structure of one particular individual grant don’t do enough to change the overall incentive structure that scientists face for them to alter their behaviour.”

Bergstrom adds that modifying grant structures can still be a valuable exercise that could result in different kinds of candidate applying for and securing funding, which in turn might affect the kind of research that is produced. Myers and Tham didn’t examine whether modifying grant structures would affect the diversity of the pool of candidates, but they have investigated the nuances of risk-taking in research in another study, also posted as a preprint in December2. Researchers were surveyed about their appetite for risky science and how it affected their approach to grants. The survey found a strong link between the perceived risk of research and the amount of time spent applying for grants.

To get a clearer understanding of whether the findings of the surveys would hold in the real world, funders would need to modify actual grants, says Myers. He acknowledges that this would be a big commitment and a risk, but doing so could have significant benefits for science.

There is growing interest in finding more efficient and effective grant structures. In November, the national funder UK Research and Innovation launched a new Metascience Unit, which is dedicated to finding more sophisticated and efficient ways to make funding and policy decisions. The following month, the US National Science Foundation announced that it would be conducting a series of social and economic experiments to determine how its funding processes can be improved.

As for the survey, Myers hopes the findings can provide insights to inform such initiatives. “As long as we’ve reduced uncertainty about what is the best way forward, that is very valuable,” he says. “We hope that our hypothetical experiments are motivation for more real-world experiments in the future.”

How did the Big Bang get its name? Here’s the real story

“Words are like harpoons,” UK physicist and astronomer Fred Hoyle told an interviewer in 1995. “Once they go in, they are very hard to pull out.” Hoyle, then 80 years old, was referring to the term Big Bang, which he had coined on 28 March 1949 to describe the origin of the Universe. Today, it is a household phrase, known to and routinely used by people who have no idea of how the Universe was born some 14 billion years ago. Ironically, Hoyle deeply disliked the idea of a Big Bang and remained, until his death in 2001, a staunch critic of mainstream Big Bang cosmology.

Several misconceptions linger concerning the origin and impact of the popular term. One is whether Hoyle introduced the nickname to ridicule or denigrate the small community of cosmologists who thought that the Universe had a violent beginning — a hypothesis that then seemed irrational. Another is that this group adopted ‘Big Bang’ eagerly, and it then migrated to other sciences and to everyday language. In reality, for decades, scientists ignored the catchy phrase, even as it spread in more-popular contexts.

The first cosmological theory of the Big Bang type dates back to 1931, when Belgian physicist and Catholic priest Georges Lemaître proposed a model based on the radioactive explosion of what he called a “primeval atom” at a fixed time in the past. He conceived that this primordial object was highly radioactive and so dense that it comprised all the matter, space and energy of the entire Universe. From the original explosion caused by radioactive decay, stars and galaxies would eventually form, he reasoned. Lemaître spoke metaphorically of his model as a “fireworks theory” of the Universe, the fireworks consisting of the decay products of the initial explosion.

However, Big Bang cosmology in its modern meaning — that the Universe was created in a flash of energy and has expanded and cooled down since — took off only in the late 1940s, with a series of papers by the Soviet–US nuclear physicist George Gamow and his US associates Ralph Alpher and Robert Herman. Gamow hypothesized that the early Universe must have been so hot and dense that it was filled with a primordial soup of radiation and nuclear particles, namely neutrons and protons. Under such conditions, those particles would gradually come together to form atomic nuclei as the temperature cooled. By following the thermonuclear processes that would have taken place in this fiery young Universe, Gamow and his collaborators tried to calculate the present abundance of chemical elements in an influential 1948 paper1.

Competing ideas

The same year, a radically different picture of the Universe was announced by Hoyle and Austrian-born cosmologists Hermann Bondi and Thomas Gold. Their steady-state theory assumed that, on a large scale, the Universe had always looked the same and would always do so, for eternity. According to Gamow, the idea of an ‘early Universe’ and an ‘old Universe’ were meaningless in a steady-state cosmology that posited a Universe with no beginning or end.

Over the next two decades, an epic controversy between these two incompatible systems evolved. It is often portrayed as a fight between the Big Bang theory and the steady-state theory, or even personalized as a battle between Gamow and Hoyle. But this is a misrepresentation.

George Gamow sitting in a chair at a desk in front of a celestial photograph hanging on the wall

Soviet–US nuclear physicist George Gamow was an early proponent of Big Bang cosmology. Credit: Bettmann/Getty

Both parties, and most other physicists of the time, accepted that the Universe was expanding — as US astronomer Edwin Hubble demonstrated in the late 1920s by observing that most galaxies are rushing away from our own. But the idea that is so familiar today, of the Universe beginning at one point in time, was widely seen as irrational. After all, how could the cause of the original explosion be explained, given that time only came into existence with it? In fact, Gamow’s theory of the early Universe played almost no part in this debate.

Rather, a bigger question at the time was whether the Universe was evolving in accordance with German physicist Albert Einstein’s general theory of relativity, which predicted that it was either expanding or contracting, not steady. Although Einstein’s theory doesn’t require a Big Bang, it does imply that the Universe looked different in the past than it does now. And an ever-expanding Universe does not necessarily entail the beginning of time. An expanding Universe could have blown up from a smaller precursor, Lemaître suggested in 1927.

An apt but innocent phrase

On 28 March 1949, Hoyle — a well-known popularizer of science — gave a radio talk to the BBC Third Programme, in which he contrasted these two views of the Universe. He referred to “the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past”. This lecture was indeed the origin of the cosmological term ‘Big Bang’. A transcript of the talk was reproduced in full in the BBC’s The Listener magazine, and Hoyle mentioned it in his 1950 book The Nature of the Universe, which was based on a series of BBC broadcasts he made earlier the same year.

Although Hoyle resolutely dismissed the idea of a sudden origin of the Universe as unacceptable on both scientific and philosophical grounds, he later said that he did not mean the term in a ridiculing or mocking way, as was often stated. None of the few cosmologists in favour of the exploding Universe, such as Lemaître and Gamow, was offended by the term. Hoyle later explained that he needed visual metaphors in his broadcast to get across technical points to the public, and the casual coining of ‘Big Bang’ was one of them. He did not mean it to be derogatory or, for that matter, of any importance.

Hoyle’s ‘Big Bang’ was a new term as far as cosmology was concerned, but it was not in general contexts. The word ‘bang’ often refers to an ordinary explosion, say, of gunpowder, and a big bang might simply mean a very large and noisy explosion, something similar to Lemaître’s fireworks. And indeed, before March 1949, there were examples in the scientific literature of meteorologists and geophysicists using the term in their publications. Whereas they referred to real explosions, Hoyle’s Big Bang was purely metaphorical, in that he did not actually think that the Universe originated in a blast.

The Big Bang was not a big deal

For the next two decades, the catchy term that Hoyle had coined was largely ignored by physicists and astronomers. Lemaître never used ‘Big Bang’ and Gamow used it only once in his numerous publications on cosmology. One might think that at least Hoyle took it seriously and promoted his coinage, but he returned to it only in 1965, after a silence of 16 years. It took until 1957 before ‘Big Bang’ appeared in a research publication2, namely in a paper on the formation of elements in stars in Scientific Monthly by the US nuclear physicist William Fowler, a close collaborator of Hoyle and a future Nobel laureate.

Before 1965, the cosmological Big Bang seems to have been referenced just a few dozen times, mostly in popular-science literature. I have counted 34 sources that mentioned the name and, of these, 23 are of a popular or general nature, 7 are scientific papers and 4 are philosophical studies. The authors include 16 people from the United States, 7 from the United Kingdom, one from Germany and one from Australia. None of the scientific papers appeared in astronomy journals.

Among those that used the term for the origin of the Universe was the US philosopher Norwood Russell Hanson, who in 1963 coined his own word for advocates of what he called the ‘Disneyoid picture’ of the cosmic explosion. He called them ‘big bangers’, a term which still can be found in the popular literature — in which the ultimate big banger is sometimes identified as God.

A popular misnomer

A watershed moment in the history of modern cosmology soon followed. In 1965, US physicists Arno Penzias and Robert Wilson’s report of the discovery of the cosmic microwave background — a faint bath of radio waves coming from all over the sky — was understood as a fossil remnant of radiation from the hot cosmic past. “Signals Imply a ‘Big Bang’ Universe” announced the New York Times on 21 May 1965. The Universe did indeed have a baby phase, as was suggested by Gamow and Lemaître. The cosmological battle had effectively come to an end, with the steady-state theory as the loser and the Big Bang theory emerging as a paradigm in cosmological research. Yet, for a while, physicists and astronomers hesitated to embrace Hoyle’s term.

Robert Wilson and Arno Penzias in front of a radio astronomy antenna

Work by US physicists Arno Penzias and Robert Wilson vindicated the Big Bang theory. Credit: Bettmann/Getty

It took until March 1966 for the name to turn up in a Nature research article3. The Web of Science database lists only 11 scientific papers in the period 1965–69 with the name in their titles, followed by 30 papers in 1970–74 and 42 in 1975–79. Cosmology textbooks published in the early 1970s showed no unity with regard to the nomenclature. Some authors included the term Big Bang, some mentioned it only in passing and others avoided it altogether. They preferred to speak of the ‘standard model’ or the ‘theory of the hot universe’, instead of the undignified and admittedly misleading Big Bang metaphor.

Nonetheless, by the 1980s, the misnomer had become firmly entrenched in the literature and in common speech. The phrase has been adopted in many languages other than English, including French (théorie du Big Bang), Italian (teoria del Big Bang) and Swedish (Big Bang teorin). Germans have constructed their own version, namely Urknall, meaning ‘the original bang’, a word that is close to the Dutch oerknal. Later attempts to replace Hoyle’s term with alternative and more-appropriate names have failed miserably.

The many faces of the metaphor

By the 1990s, ‘Big Bang’ had migrated to commercial, political and artistic uses. During the 1950s and 1960s, the term frequently alluded to the danger of nuclear warfare, as it did in UK playwright John Osborne’s play Look Back in Anger, first performed in 1956. The association of nuclear weapons and the explosive origin of the Universe can be found as early as 1948, before Hoyle coined his term. As its popularity increased, ‘Big Bang’ began being used to express a forceful beginning or radical change of almost any kind — such as the Bristol Sessions, a series of recording sessions in 1927, being referred to as the ‘Big Bang’ of modern country music.

In the United Kingdom, the term was widely used for a major transformation of the London Stock Exchange in 1986. “After the Big Bang tomorrow, the City will never be the same again,” wrote Sunday Express Magazine on 26 October that year. That use spread to the United States. In 1987, the linguistic journal American Speech included ‘Big Bang’ in its list of new words and defined ‘big banger’ as “one involved with the Big Bang on the London Stock Exchange”.

Today, searching online for the ‘Big Bang theory’ directs you first not to cosmology, but to a popular US sitcom. Seventy-five years on, the name that Hoyle so casually coined has indeed metamorphosed into a harpoon-like word: very hard to pull out once in.

Weird new electron behaviour in stacked graphene thrills physicists

Illustration showing four graphene layers.

Electrons in stacked sheets of staggered graphene collectively act as though they have fractional charges at ultra-low temperatures. Credit: Ramon Andrade 3DCiencia/Science Photo Library

Minneapolis, Minnesota

Last May, a team led by physicists at the University of Washington in Seattle observed something peculiar. When the scientists ran an electrical current across two atom-thin sheets of molybdenum ditelluride (MoTe2), the electrons acted in concert, like particles with fractional charges. Resistance measurements showed that, rather than the usual charge of –1, the electrons behaved like particles with charges of –2/3 or –3/5, for instance. What was truly odd was that the electrons did this entirely because of the innate properties of the material, without any external magnetic field coaxing them. The researchers published the results a few months later, in August1.

That same month, this phenomenon, known as the fractional quantum anomalous Hall effect (FQAHE), was also observed in a completely different material. A team led by Long Ju, a condensed-matter physicist at the Massachusetts Institute of Technology (MIT) in Cambridge, saw the effect when they sandwiched five layers of graphene between sheets of boron nitride. They published their results in February this year2 — and physicists are still buzzing about it.

At the American Physical Society (APS) March Meeting, held in Minneapolis, Minnesota, from 3 to 8 March, Ju presented the team’s findings, which haven’t yet been replicated by other researchers. Attendees, including Raquel Queiroz, a theoretical physicist at Columbia University in New York City, said that they thought the results were convincing, but were scratching their heads over the discovery. “There is a lot we don’t understand,” Queiroz says. Figuring out the exact mechanism of the FQAHE in the layered graphene will be “a lot of work ahead of theorists”, she adds.

Although the FQAHE might have practical applications down the line — fractionally charged particles are a key requirement for a certain type of quantum computer — the findings are capturing physicists’ imagination because they are fundamentally new discoveries about how electrons behave.

“I don’t know anyone who’s not excited about this,” says Pablo Jarillo-Herrero, a condensed-matter physicist at MIT who was not involved with the studies. “I think the question is whether you’re so excited that you switch all your research and start working on it, or if you’re just very excited.”

Strange maths

Strange behaviour by electrons isn’t new.

In some materials, usually at temperatures near absolute zero, electrical resistance becomes quantized. Specifically, it’s the material’s transverse resistance that does this. (An electrical current encounters opposition to its flow both in the same direction as the current — called longitudinal resistance — and in the perpendicular direction — the transverse resistance.)

Quantized ‘steps’ in the transverse resistance occur at multiples of electron charge: 1, 2, 3 and so on. These plateaus are the result of a strange phenomenon: the electrons maintain the same transverse resistance even as charge density increases. That’s a little like vehicles on a highway moving at the same speed, even with more traffic. This is known as the quantum Hall effect.

In a different set of materials, with less disorder, the transverse resistance can even display plateaus at fractions of electron charge: 2/5, 3/7 and 4/9, for example. The plateaus take these values because the electrons collectively act like particles with fractional charges — hence the fractional quantum Hall effect (FQHE).

Key to both phenomena is a strong external magnetic field, which prevents electrons from crashing into each other and enables them to interact.
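The plateau values described above can be put in numbers. As a rough illustration (the formula is standard quantum Hall physics, not given in the article), the transverse resistance on a plateau is R_xy = h/(νe²), where ν is the integer or fractional filling factor — the fractions 2/3 and 3/5 below are among those reported for MoTe2:

```python
# Hall plateau resistances: R_xy = h / (nu * e^2).
# h and e are exact in the 2019 SI; fillings are illustrative.
from fractions import Fraction

H = 6.62607015e-34   # Planck constant, J*s
E = 1.602176634e-19  # elementary charge, C
R_K = H / E**2       # von Klitzing constant, ~25,812.8 ohms

for nu in [1, 2, 3, Fraction(2, 3), Fraction(3, 5)]:
    r_xy = R_K / float(nu)
    print(f"nu = {nu}: R_xy = {r_xy:,.1f} ohms")
```

Because h and e are fixed constants, these plateaus serve as a universal resistance standard: integer fillings divide the von Klitzing constant, while fractional fillings such as 2/3 push the resistance above it.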

A photo of the team. From left to right: Long Ju, Postdoc Zhengguang Lu, visiting undergraduate Yuxuan Yao, graduate student Tonghang Hang.

(Left to right) Long Ju, Zhengguang Lu, Yuxuan Yao and Tonghang Hang are all part of the team at MIT that demonstrated the FQAHE in layered graphene. Credit: Jixiang Yang

The FQHE, discovered in 1982, revealed the richness of electron behaviour. No longer could physicists think of electrons as single particles; in delicate quantum arrangements, the electrons could lose their individuality and act together to create fractionally charged particles. “I think people don’t appreciate how different [the fractional] is from the integer quantum Hall effect,” says Ashvin Vishwanath, a theoretical physicist at Harvard University in Cambridge. “It’s a new world.”

Over the next few decades, theoretical physicists came up with models to explain the FQHE and predict its effects. During their exploration, a tantalizing possibility appeared: perhaps a material could exhibit resistance plateaus without any external magnetic field. The effect, now dubbed the quantum anomalous Hall effect — ‘anomalous’, for the lack of a magnetic field — was finally observed in thin ferromagnetic films by a team at Tsinghua University in Beijing, in 20123.

Carbon copy

Roughly a decade later, the University of Washington team reported the FQAHE for the first time1, in a specially designed 2D material: two sheets of MoTe2 stacked on top of one another and offset by a twist.

This arrangement of MoTe2 is known as a moiré material. Originally used to refer to a patterned textile, the term has been appropriated by physicists to describe the patterns in 2D materials created from atom-thin lattices when they are stacked and then twisted, or staggered atop one another. The slight offset between atoms in different layers of the material shifts the hills and valleys of its electric potential. And it effectively acts like a powerful magnetic field, taking the place of the one needed in the quantum Hall effect and the FQHE.
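The scale of the moiré pattern set by that twist can be estimated with a standard small-angle result for two identical twisted lattices, λ = a/(2 sin(θ/2)) — this formula and the approximate lattice constant are textbook values, not from the article:

```python
# Moire superlattice period for two identical lattices twisted by theta:
# lambda = a / (2 * sin(theta / 2)). Lattice constant is approximate.
import math

def moire_period(a_nm, theta_deg):
    """Return the moire wavelength in nm for lattice constant a (nm)
    and twist angle theta (degrees)."""
    theta = math.radians(theta_deg)
    return a_nm / (2 * math.sin(theta / 2))

# MoTe2 (a ~ 0.35 nm) at the two twist angles mentioned in the article
for theta in (1.4, 4.0):
    print(f"theta = {theta} deg: period ~ {moire_period(0.35, theta):.1f} nm")
```

The pattern is tens of lattice spacings across at these angles, and it shrinks as the twist grows — which is why moving from a 1.4º to a 4º twist changes the electronic landscape so markedly.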

Xiaodong Xu, a condensed-matter physicist at the University of Washington, talked about the MoTe2 discovery at the APS meeting. Theory hinted that the FQAHE would appear in the material at about a 1.4º twist angle. “We spent a year on it, and we didn’t see anything,” Xu told Nature.

Anomalous behaviour. Graphic showing the details of new moire material.

Source: Adapted from Ref. 2.

Then, the researchers tried a larger angle — a twist of about 4º. Immediately, they began seeing signs of the effect. Eventually, they measured the electrical resistance and spotted the signature plateaus of the FQAHE. Soon after, a team led by researchers at Shanghai Jiao Tong University in China replicated the results4.

Meanwhile at MIT, Ju was perfecting his technique, sandwiching graphene between layers of boron nitride. Similar to graphene, the sheets of boron nitride that Ju’s team used were a mesh of atoms linked together in a hexagonal pattern. Its lattice has a slightly different size than graphene; the mismatch creates a moiré pattern (see ‘Anomalous behaviour’).

Last month, Ju published a report2 about seeing the characteristic plateaus. “It is a really amazing result,” Xu says. “I’m very happy to see there’s a second system.” Since then, Ju says that he’s also seen the effect when using four and six layers of graphene.

Both moiré systems have their pros and cons. MoTe2 exhibited the effect at a few kelvin, as opposed to 0.1 kelvin for the layered-graphene sandwich. (Low temperatures are required to minimize disorder in the systems.) But graphene is a cleaner, higher-quality material that is easier to measure. Experimentalists are now trying to replicate the results in graphene and to find other materials that behave similarly.

Moiré than bargained for

Theorists are relatively comfortable with the MoTe2 results, for which the FQAHE was partly predicted. But Ju’s layered graphene moiré was a shock to the community, and researchers are still struggling to explain how the effect happens. “There’s no universal consensus on what the correct theory is,” Vishwanath says. “But they all agree that it’s not the standard mechanism.” Vishwanath and his colleagues posted a preprint proposing a theory that the moiré pattern might not be that important to the FQAHE5.

One reason to doubt the importance of the moiré is the location of the electrons in the material: most of the activity is in the topmost layer of graphene, far from the moiré pattern at the bottom of the sandwich — between the graphene and the boron nitride — that is supposed to influence the electrons most strongly. But B. Andrei Bernevig, a theoretical physicist at Princeton University in New Jersey and a co-author of another preprint proposing a mechanism for the FQAHE in the layered graphene6, urges caution about theory-based calculations, because they rely on currently unverified assumptions. He says that the moiré pattern probably matters, but less than it does in MoTe2.

For theorists, the uncertainty is exciting. “There are people who would say that everything has been seen in the quantum Hall effect,” Vishwanath says. But these experiments, especially the one using the layered graphene moiré, show that there are still more mysteries to uncover.
