
Diabetes drug shows promise against Parkinson’s

Hello Nature readers, would you like to get this Briefing in your inbox free every day? Sign up here.

Viewers along the eclipse’s path in North America will watch the Moon cross the Sun’s face and block the solar disk, offering the chance to see its outer atmosphere by eye. Credit: Alan Dyer/VW Pics/UIG via Getty

On 8 April, researchers will get an unprecedented view of the Sun’s outer wispy atmosphere: the corona. The solar eclipse visible in parts of North America will coincide with a solar maximum — a period of extreme activity that occurs every 11 years. One research team will chase the eclipse from a jet, adding 90 more seconds of observation time to the maximum of 4 minutes and 30 seconds seen by observers on the ground. One question they’re hoping to answer: why the corona is so much hotter than the solar surface. That, says solar physicist James Klimchuk, is like walking away from a campfire — but finding that instead of cooling down, you get warmer.

Nature | 6 min read

A diabetes drug called lixisenatide has shown promise in slowing the progression of Parkinson’s disease. Lixisenatide is in the family of GLP-1 receptor agonists, such as Ozempic, that have made headlines as weight-loss drugs. In the latest clinical trial, lixisenatide was given to people with mild to moderate Parkinson’s who were already receiving a standard treatment for the condition. After a year they saw no worsening of their symptoms, unlike a control group whose condition did worsen. Further work is needed to reduce the drug’s side effects, such as nausea and vomiting, and to determine whether its benefits last. “We’re all cautious. There’s a long history of trying different things in Parkinson’s that ultimately didn’t work,” says neurologist David Standaert.

Nature | 4 min read

Reference: The New England Journal of Medicine paper

The Bill & Melinda Gates Foundation, one of the world’s top biomedical research funders, will from next year require grant holders to make their research publicly available as preprints, which are not peer reviewed. It will also no longer pay article-processing charges (APCs) to publishers in order to secure open access, in which the peer-reviewed version of the paper is free to read. The change follows criticism that APCs create inequities because of the costs they push onto researchers and funders. “We’ve become convinced that this money could be better spent elsewhere,” said a Gates representative.

Nature | 7 min read

Features & opinion

This week marks 30 years since the start of the 1994 genocide in Rwanda, in which members of the Hutu ethnic group killed an estimated 800,000 people from Tutsi communities. The event is now one of the most researched of its kind. These studies are difficult, not least because the genocide almost wiped out Rwanda’s academic community. But efforts, especially by local researchers, are helping to inform responses to other violent crises and longer-term approaches to healing. Sociologist Assumpta Mugiraneza is leading challenging work that gathers testimonies from the genocide — and of the rich lives people had before the atrocity. To think about genocide, she says, “we must dare to seek humanity where humanity has been denied”.

Nature | 17 min read

Two future academics — a rat and a raven — ponder the fate of past primates in the latest short story for Nature’s Futures series.

Nature | 6 min read

Andrew Robinson’s pick of the top five science books to read this week includes an account of women working in nature and a thoughtful history of how our unequal society deals with epidemics.

Nature | 4 min read

When Brazilian biologist Fernanda Staniscuaski returned from parental leave, her grant applications started to be rejected because she “was not producing as much as my peers”. “Maybe I was never meant to be in science,” she recalls thinking. As the founder of the Parent in Science movement, she is now lobbying for greater acceptance of career breaks. As a first step, the Brazilian Ministry of Education has created a working group to develop a national policy for mothers in academia. “That was huge,” Staniscuaski says.

Nature Careers Working Scientist podcast | 20 min listen

Quote of the day

When the remains of the Australopithecus afarensis nicknamed ‘Lucy’ were discovered in 1974, we never could have predicted how rare such finds would be, says paleoanthropologist Bernard Wood. Nevertheless, the accumulated evidence of even older hominins has challenged Lucy’s status as ‘mother of us all’. (Science | 14 min read)

In our penguin puzzle this week, Leif Penguinson is exploring a rock formation on the Barker Dam Trail in Joshua Tree National Park, California. Can you find the penguin?

The answer will be in Monday’s e-mail, all thanks to Briefing photo editor and penguin wrangler Tom Houghton.

This newsletter is always evolving — tell us what you think! Please send your feedback to [email protected].

Flora Graham, senior editor, Nature Briefing

With contributions by Katrina Krämer, Sarah Tomlin and Sara Phillips

Want more? Sign up to our other free Nature Briefing newsletters:

Nature Briefing: Anthropocene — climate change, biodiversity, sustainability and geoengineering

Nature Briefing: AI & Robotics — 100% written by humans, of course

Nature Briefing: Cancer — a weekly newsletter written with cancer researchers in mind

Nature Briefing: Translational Research covers biotechnology, drug discovery and pharma



Soundcore’s new sleep earbuds promise noise isolation that’ll block out “sawing wood or grinding gravel”

To mark World Sleep Day 2024, Anker’s Soundcore has announced a new and improved set of sleep earbuds. The Sleep A20 true wireless earbuds are on their way, and come with a slew of improvements over their predecessor (2022’s Sleep A10 earbuds). The brand promises vastly improved noise isolation, longer battery life, and some new features that you don’t tend to find in the best earbuds, such as an alarm and sleep tracking via a companion app.

From the outside, the A20s look similar to the A10s, with an ergonomic, side sleeper-friendly design. Inside, though, are new proprietary ‘Twin-Seal’ eartips that Soundcore says are three times more effective at blocking sound than traditional eartips. 

Soundcore Sleep A20 earbuds

(Image credit: Anker)

Although it’s not active noise cancellation, Soundcore says the seal is so effective it’ll even block sounds of “sawing wood, chopping logs or grinding gravel” on the other side of the bed. If your partner happens not to be into nocturnal DIY, the A20s will also take care of more typical late-night disturbances, such as snoring or passing traffic.

If you’re concerned, as you might well be, that such effective sound blocking might mean you miss your morning alarm, Soundcore has remedied that by adding an alarm function – also beneficial for couples operating on different sleep schedules, who need a wake-up call that won’t disturb a happily snoozing bedmate.

Soundcore Sleep A20 earbuds

(Image credit: Anker)

If you don’t want to sleep in complete silence, the A20s have an in-built library of ambient sounds and white noise that can be accessed while the buds are in their battery-efficient ‘sleep mode’. Alternatively, you can connect to Bluetooth to listen to your favorite sleep podcast, audiobook or Spotify playlist via your phone. Battery life has also been improved compared to the first-gen buds, with up to 14 hours of playtime in sleep mode or up to 10 hours when using Bluetooth.



Chatbots promise a future that will never arrive

Conversing with your computer has been a dream of futurists and technologists for decades. When you look at 2004’s state of the art, it’s staggering to see how far we’ve come. There are now billions of devices in our hands and homes that listen to our queries and do their very best to answer them. But for all of the time, money and effort, chatbots of any stripe have not swallowed the world as their creators intended. They’re miraculous. They’re also boring. And it’s worth asking why.

“Chatbot” is a term covering a lot of systems, from voice assistants to AI and everything in between. Talking to your computer in the not-so-good old days meant typing into a window and watching the machine attempt a facsimile of conversation rather than the real thing. The old ELIZA (1964 to 1967) trick of restating user inputs in the form of a question helped sell this performance. And this continued even as far as 2001’s SmarterChild chatbot. The other branch of this work was to digitize the analog with voice-to-text engines, like Nuance’s frustrating but occasionally wonderful product.

In 2011, the ideas in that early work joined up to make Siri for the iPhone 4S, which was quietly built on Nuance’s work. Amazon founder Jeff Bezos saw Siri’s promise early and launched a large internal project to make a homegrown competitor. In 2014, Alexa arrived, with Cortana and Google Assistant following in subsequent years. Natural language computing was now available on countless smartphones and smart home devices.

Companies are largely reluctant to share specifics about the cost of building new projects, but chat has been costly. Forbes reported in 2011 that buying the startup behind Siri cost Apple $200 million. In 2018, The Wall Street Journal quoted Dave Limp, who said Amazon’s Alexa team had more than 10,000 employees. A Business Insider story from 2022 suggested the company pegged more than $10 billion in losses on Alexa’s development. Last year, The Information claimed Apple is now spending a million dollars a day on AI development.

So, what do we use this costly technology for? Turning our smart bulbs on and off, playing music, answering the doorbell and maybe getting the sports scores. In the case of AI, perhaps getting poorly summarized web search results (or an image of human subjects with too many fingers). You’re certainly not having much in the way of meaningful conversation or pulling vital data out of these things. Because in pretty much every case, their comprehension sucks and they struggle with the nuances of human speech. And this isn’t isolated. In 2021, Bloomberg reported on internal Amazon data saying up to a quarter of buyers stop using their Alexa unit entirely in the second week of owning one.

The oft-cited goal has been to make these platforms conversationally intelligent, answering your questions and responding to your commands. But while they can do some basic things pretty well, like mostly understanding when you ask them to turn your lights down, everything else isn’t so smooth. Natural language tricks users into thinking the systems are more sophisticated than they actually are. So when it comes time to ask a complex question, you’re more likely to get the first few lines of a Wikipedia page, eroding any faith in their ability to do more than play music or crank the thermostat.

The assumption is that generative AIs bolted onto these natural language interfaces will solve all of the issues presently associated with voice. And yes, on one hand, these systems will be better at pantomiming a realistic conversation and trying to give you what you ask for. But, on the other hand, when you actually look at what comes out the other side, it’s often gibberish. These systems can gesture toward surface-level interactions but can’t do anything more substantive. Don’t forget when Sports Illustrated tried to use AI-generated content that boldly claimed volleyball could be “tricky to get into, especially without an actual ball to practice with.” No wonder so many of these systems are, as Bloomberg reported last year, propped up by underpaid human labor.

Of course, the form’s boosters will suggest it’s early days and, as OpenAI CEO Sam Altman has said recently, we still need billions of dollars in more chip research and development. But that makes a mockery of the decades of development and billions of dollars already spent to get where we are today. And it’s not just cash or chips that are the issue: Last year, The New York Times reported the power demands of AI alone could skyrocket to as much as 134 terawatt hours per year by 2027. Given the urgent need to curb power consumption and make things more efficient, it doesn’t bode well for either the future of its development or our planet.

We’ve had 20 years of development, but chatbots still haven’t caught on in the ways we were told they would. At first, it was because they simply struggled to understand what we wanted, but even if that’s solved, would we suddenly embrace them? After all, the underlying problem remains: We simply don’t trust these platforms, both because we have no faith in their ability to do what we ask them to and because of the motivations of their creators.

One of the most enduring examples of natural language computing in fiction, and one often cited by real-world makers, is the computer from Star Trek: The Next Generation. But even there, with a voice assistant that seems to possess something close to general intelligence, it’s not trusted to run the ship on its own. A crew member still sits at every station, carrying out the orders of the captain and generally performing the mission. Even in a future so advanced it’s free of material need, beings still crave the sensation of control.


To celebrate Engadget’s 20th anniversary, we’re taking a look back at the products and services that have changed the industry since March 2, 2004.

This article contains affiliate links; if you click such a link and make a purchase, we may earn a commission.
